Software project outcomes heavily depend on natural language requirements, which often invite diverse interpretations and issues such as ambiguity and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which reduces the dimensionality and redundancy of features and retains only the relevant ones; transfer learning, which trains and tests the model on different datasets to analyze how much of the learning transfers to other datasets; and an ensemble method, which explores the gain in performance from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance through better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The results reveal that combining techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) helps optimize software bug prediction models and yields a high-performing, useful end model.
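As a rough illustration of how these ingredients fit together, the sketch below chains mutual-information feature selection with a soft-voting ensemble in scikit-learn and scores it by AUC-ROC; the synthetic data stands in for the NASA/Promise defect datasets, and the particular classifiers and parameter values are illustrative assumptions rather than the paper's exact configuration. Evaluating transfer would simply mean fitting on one dataset and scoring on another instead of on a held-out split.

```python
# Hypothetical sketch: feature selection + soft-voting ensemble, scored by AUC-ROC.
# Synthetic data stands in for the NASA/Promise defect datasets.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=40, n_informative=12,
                           weights=[0.8, 0.2], random_state=0)   # imbalanced, defect-like data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("dt", DecisionTreeClassifier(max_depth=5, random_state=0))],
    voting="soft")

model = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=15)),   # drop redundant/irrelevant features
    ("clf", ensemble),
])
model.fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```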
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
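To make the frame-encoder-plus-sequence-model pattern behind much of this isolated-sign work concrete, the sketch below combines a small CNN applied per frame with a GRU over time; it is a generic illustration rather than any specific system from the surveyed literature, and the clip shape, channel counts, and 26-class output are arbitrary assumptions.

```python
# Illustrative CNN-per-frame + GRU-over-time model for isolated sign recognition.
# Input: a batch of short clips shaped (batch, frames, 3, 64, 64); all sizes are arbitrary.
import torch
import torch.nn as nn

class IsolatedSignNet(nn.Module):
    def __init__(self, num_classes=26):
        super().__init__()
        self.frame_encoder = nn.Sequential(                # per-frame spatial features
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten())
        self.temporal = nn.GRU(32, 64, batch_first=True)   # temporal modelling across frames
        self.head = nn.Linear(64, num_classes)

    def forward(self, clips):                              # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.frame_encoder(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.temporal(feats)
        return self.head(h[-1])                            # class logits per clip

logits = IsolatedSignNet()(torch.randn(4, 16, 3, 64, 64))
print(logits.shape)   # torch.Size([4, 26])
```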
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan of Capital Normal University and published in September 2022, offers a systematic introduction to foreign language teacher learning, which to some extent makes up for this shortcoming. The book traces the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews cutting-edge research results, and anticipates future development trends, painting a complete research picture for researchers in the field of foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. This is an important source of inspiration for conducting foreign language teacher learning research in the future. This paper reviews the book in terms of its content, major characteristics, contributions, and limitations.
Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, resource-poor languages such as Urdu remain a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Persian, Pashto, Turkish, Punjabi, Saraiki, and more. Urdu literature, with its distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Following this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, achieving a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
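A minimal sketch of the RNN branch is given below as a Keras LSTM classifier over the five labels; the Urdu snippets, vocabulary size, and layer widths are placeholder assumptions rather than the paper's corpus or tuned hyperparameters.

```python
# Minimal LSTM sentiment classifier over five labels, standing in for the Urdu pipeline
# described above; the example texts, vocabulary size, and layer widths are placeholders.
import numpy as np
import tensorflow as tf

texts = ["یہ فلم بہت اچھی تھی", "سروس بہت خراب تھی", "کھانا ٹھیک تھا"]   # placeholder Urdu snippets
labels = np.array([0, 1, 2])   # 0=Positive, 1=Negative, 2=Neutral, 3=Mixed, 4=Ambiguous

vectorize = tf.keras.layers.TextVectorization(max_tokens=20000, output_sequence_length=64)
vectorize.adapt(texts)
X = vectorize(tf.constant(texts))                      # integer sequences, shape (3, 64)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(20000, 128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)              # real training needs the full corpus
print(model.predict(vectorize(tf.constant(["بہت برا تجربہ"]))).argmax(axis=-1))
```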
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study aims to investigate solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least square sense. Numerical simulations are provided to validate the theoretical findings.
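For reference, the standard P-type update law that such a scheme builds on can be written as

$$u_{k+1}(t) = u_k(t) + L\,e_k(t+1), \qquad e_k(t) = y_d(t) - y_k(t),$$

where $u_k$ and $y_k$ are the input and output at iteration $k$, $y_d$ is the (possibly unachievable) reference, and $L$ is the learning gain; in the setting described above, $L$ is designed from input-output data so that $u_k$ converges to the input whose output is closest to $y_d$ in the least-squares sense. This is only the textbook form stated as context; the extended scheme additionally modifies $e_k$ through sampled output data.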
Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output (MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning control (ILC) scheme based on zeroing neural networks (ZNNs) is proposed. First, the equivalent dynamic linearization data model is obtained by means of dynamic linearization technology, which exists theoretically in the iteration domain. Then, the iterative extended state observer (IESO) is developed to estimate the disturbance and the coupling between systems, and the decoupled dynamic linearization model is obtained for the purpose of controller synthesis. To solve the zero-seeking tracking problem with inherent tolerance of noise, an ILC based on a noise-tolerant modified ZNN is proposed. The strict assumptions imposed on the initialization conditions of each iteration in existing ILC methods can be completely removed with our method. In addition, theoretical analysis indicates that the modified ZNN can converge to the exact solution of the zero-seeking tracking problem. Finally, a generalized example and an application-oriented example are presented to verify the effectiveness and superiority of the proposed scheme.
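For context, a common iteration-domain compact-form dynamic linearization used in data-driven ILC, which is assumed here to be the kind of data model such a scheme starts from, is

$$y_{k+1}(t+1) = y_k(t+1) + \Phi_k(t)\,\Delta u_k(t), \qquad \Delta u_k(t) = u_{k+1}(t) - u_k(t),$$

where $\Phi_k(t)$ is an unknown pseudo-partial-derivative (a pseudo-Jacobian matrix in the MIMO case) estimated from measured input-output data; the IESO and the noise-tolerant ZNN then operate on this estimated model rather than on an explicit plant model.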
One of the most obvious markers of an individual's identity is their language. However, the relationship between identity and language acquisition is in some respects overlooked. In fact, many studies have shown that identity may influence the reasons behind language acquisition, especially in bilingual or multilingual societies. Learning English can provide EFL (English as a Foreign Language) students with a strong sense of identity and encourage them to exercise their agency, which can improve learning efficiency and effectiveness. This paper examines how the efficacy and outcomes of language learning are influenced by the learner's identity.
With advancements in computing power and the overall quality of images captured on everyday cameras, a much wider range of possibilities has opened in various scenarios. This has several implications for deaf and mute people, as they have a chance to communicate with a greater number of people much more easily. More than ever before, there is a wealth of information about sign language usage in the real world. Sign languages, and by extension the datasets available, are of two forms: isolated sign language and continuous sign language. The main difference between the two is that in isolated sign language, the hand signs cover individual letters of the alphabet, whereas in continuous sign language, hand signs for entire words are used. This paper explores a novel deep learning architecture that uses recently published large pre-trained image models to quickly and accurately recognize the alphabets of American Sign Language (ASL). The study focuses on isolated sign language to demonstrate that a high level of classification accuracy can be achieved on the data, thereby showing that interpreters can be implemented in the real world. The newly proposed MobileNetV2-based architecture serves as the backbone of this study. It is designed to run on end devices such as mobile phones and to infer signs from images in a relatively short amount of time. With the proposed architecture, a classification accuracy of 98.77% is achieved on both Indian Sign Language (ISL) and American Sign Language (ASL), outperforming existing state-of-the-art systems.
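A minimal sketch of this kind of transfer-learning setup is shown below, assuming a frozen ImageNet-pretrained MobileNetV2 backbone with a small softmax head; the 26-class output, the 224x224 input size, and the training hyperparameters are assumptions for illustration rather than the paper's exact configuration.

```python
# Sketch: MobileNetV2 backbone (ImageNet weights) with a new classification head for
# alphabet signs; 26 classes and 224x224 inputs are assumptions, not the paper's exact setup.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
base.trainable = False                                   # freeze the pre-trained features

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(26, activation="softmax")(x)   # A-Z hand signs

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train_ds/val_ds: labelled image datasets
```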
The revolutionary online application ChatGPT has brought immense concerns to the education field. Foreign language teachers, being among those most reliant on writing assessments, were among the most anxious, a concern exacerbated by the extensive media coverage of the much-fantasized functionality of the chatbot. Hence, the article starts by elucidating the mechanisms, functions, and common misconceptions about ChatGPT. Issues and risks associated with its usage are discussed, followed by an in-depth discussion of how the chatbot can be harnessed by learners and teachers. It is argued that ChatGPT offers major opportunities for teachers and education institutions to improve second/foreign language teaching and assessment, and it similarly provides researchers with an array of research opportunities, especially towards a more personalized learning experience.
This study presents a novel and innovative approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution utilizes a deep learning-based classification approach and the transfer learning technique to retrain 12 image recognition models. The image-based translation method maps sign language gestures to corresponding letters or words using distance measures and classification as a machine learning technique. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, with a translation accuracy of 93.7%. This research makes a significant contribution to the field of ATSL. It offers a practical solution for improving communication for individuals with special needs, such as the deaf and mute community. This work demonstrates the potential of deep learning techniques in translating sign language into natural language and highlights the importance of ATSL in facilitating communication for individuals with disabilities.
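One way to organize the retraining of many pre-trained backbones with a shared head is a simple loop over model constructors, as sketched below; only three backbones are shown, the 32-class output is an assumption, and nothing here reproduces the paper's distance-measure mapping step.

```python
# Illustrative pattern for retraining several pre-trained backbones with the same
# classification head; three backbones stand in for the twelve, and the 32-class
# output is an assumed number of Arabic letter signs.
import tensorflow as tf

BACKBONES = {
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "ResNet50": tf.keras.applications.ResNet50,
    "DenseNet121": tf.keras.applications.DenseNet121,
}

def build_classifier(backbone_ctor, num_classes=32):
    base = backbone_ctor(weights="imagenet", include_top=False, pooling="avg",
                         input_shape=(224, 224, 3))
    base.trainable = False                                # transfer learning: freeze the backbone
    model = tf.keras.Sequential([base,
                                 tf.keras.layers.Dense(num_classes, activation="softmax")])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

for name, ctor in BACKBONES.items():
    model = build_classifier(ctor)
    print(name, "parameters:", model.count_params())
    # model.fit(train_ds, validation_data=val_ds, epochs=5)   # compare accuracy per backbone
```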
The recent developments in Multimedia Internet of Things (MIoT) devices, empowered with Natural Language Processing (NLP) models, seem to be a promising future for smart devices. NLP plays an important role in industrial models such as speech understanding, emotion detection, home automation, and so on. If an image needs to be captioned, then the objects in that image, their actions and connections, and any salient feature that remains under-projected or missing from the image should be identified. The aim of the image captioning process is to generate a caption for an image. In the next step, the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct. In this scenario, a computer vision model is used to identify the objects and NLP approaches are followed to describe the image. The current study develops a Natural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System (NLPODL-IICS). The aim of the presented NLPODL-IICS model is to produce a proper description for an input image. To attain this, the proposed NLPODL-IICS follows two stages: encoding and decoding. Initially, at the encoding side, the proposed NLPODL-IICS model makes use of Hunger Games Search (HGS) with the Neural Architecture Search Network (NASNet) model. This model represents the input data appropriately by inserting it into a predefined-length vector. Besides, during the decoding phase, the Chimp Optimization Algorithm (COA) with a deeper Long Short Term Memory (LSTM) approach is followed to concatenate the description sentences produced by the method. The application of the HGS and COA algorithms helps accomplish proper parameter tuning for the NASNet and LSTM models, respectively. The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets. A widespread comparative analysis confirmed the superior performance of the NLPODL-IICS model over other models.
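A bare-bones version of the encoder-decoder ("merge") captioning pattern described here is sketched below, with a frozen NASNetMobile encoder and an LSTM decoder that predicts the next caption word; the vocabulary size, caption length, and layer widths are placeholders, and the HGS/COA hyperparameter-tuning steps are deliberately omitted.

```python
# Bare-bones encoder-decoder captioning sketch: NASNetMobile image features feed an LSTM
# decoder that predicts the next caption word. Vocabulary size, caption length, and layer
# widths are placeholder assumptions; the HGS/COA tuning steps are omitted.
import tensorflow as tf

VOCAB, MAXLEN = 5000, 20

encoder = tf.keras.applications.NASNetMobile(weights="imagenet", include_top=False,
                                              pooling="avg", input_shape=(224, 224, 3))
encoder.trainable = False

image_in = tf.keras.Input(shape=(224, 224, 3))
img_vec = tf.keras.layers.Dense(256, activation="relu")(encoder(image_in))

caption_in = tf.keras.Input(shape=(MAXLEN,))                       # partial caption so far
txt = tf.keras.layers.Embedding(VOCAB, 256, mask_zero=True)(caption_in)
txt = tf.keras.layers.LSTM(256)(txt)

merged = tf.keras.layers.add([img_vec, txt])                       # fuse image and text states
next_word = tf.keras.layers.Dense(VOCAB, activation="softmax")(merged)

captioner = tf.keras.Model([image_in, caption_in], next_word)
captioner.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
captioner.summary()
# Training pairs: ([image, caption[:t]], caption[t]) for every position t of every caption.
```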
Sign language involves the motion of the arms and hands to communicate with people with hearing disabilities. Several models are available in the literature for sign language detection and classification with enhanced outcomes, and the latest advancements in computer vision enable us to perform sign/gesture recognition using deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognizing and classifying sign language gestures. The ASLGC-DHOML model first pre-processes the input gesture images and generates feature vectors using the densely connected network (DenseNet169) model. For gesture recognition and classification, a multilayer perceptron (MLP) classifier is exploited to recognize and classify the existence of sign language gestures. Lastly, the DHO algorithm is utilized for parameter optimization of the MLP model. The experimental results of the ASLGC-DHOML model are tested and the outcomes are inspected under distinct aspects. The comparison analysis highlights that the ASLGC-DHOML method yields better gesture classification results than other techniques, with a maximum accuracy of 92.88%.
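The two-stage idea, frozen DenseNet169 features followed by an MLP classifier, can be sketched as below; random arrays stand in for the gesture images, and fixed hyperparameters replace the DHO-tuned ones.

```python
# Sketch of the two-stage idea: frozen DenseNet169 features, then an MLP classifier.
# Random arrays stand in for gesture images, and the DHO-tuned hyperparameters are
# replaced by fixed illustrative values.
import numpy as np
import tensorflow as tf
from sklearn.neural_network import MLPClassifier

extractor = tf.keras.applications.DenseNet169(weights="imagenet", include_top=False,
                                              pooling="avg", input_shape=(224, 224, 3))

images = np.random.rand(32, 224, 224, 3).astype("float32") * 255   # placeholder gesture images
labels = np.random.randint(0, 32, size=32)                          # placeholder sign classes

features = extractor.predict(
    tf.keras.applications.densenet.preprocess_input(images), verbose=0)   # shape (32, 1664)

mlp = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=300, random_state=0)
mlp.fit(features, labels)
print("train accuracy:", mlp.score(features, labels))
```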
Sentiment analysis (SA) is the procedure of recognizing the emotions related to data that exist in social networking. The existence of sarcasm in textual data is a major challenge to the efficiency of SA. Earlier works on sarcasm detection in text utilize lexical as well as pragmatic cues, namely interjections, punctuation, and sentiment shift, which are vital indicators of sarcasm. With the advent of deep learning, recent works leverage neural networks to learn lexical and contextual features, removing the need for handcrafted features. In this respect, this study designs a deep learning with natural language processing enabled SA (DLNLP-SA) technique for sarcasm classification. The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data. Besides, the DLNLP-SA technique holds various sub-processes, namely preprocessing, feature vector conversion, and classification. Initially, the preprocessing is performed in diverse ways such as single character removal, multi-space removal, URL removal, stopword removal, and tokenization. Secondly, the transformation into feature vectors takes place using the N-gram feature vector technique. Finally, a mayfly optimization (MFO) with multi-head self-attention based gated recurrent unit (MHSA-GRU) model is employed for the detection and classification of sarcasm. To verify the enhanced outcomes of the DLNLP-SA model, a comprehensive experimental investigation is performed on the News Headlines Dataset from the Kaggle Repository, and the results signify its supremacy over existing approaches.
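The preprocessing and N-gram feature-vector steps are the most mechanical part of this pipeline and are sketched below; a plain logistic-regression classifier stands in for the MFO-tuned MHSA-GRU model so the example stays self-contained, and the two headlines are placeholders for the News Headlines dataset.

```python
# Sketch of the preprocessing and n-gram feature steps for sarcasm detection; a plain
# logistic-regression classifier stands in for the MFO-tuned MHSA-GRU model, and the
# headlines are placeholders for the News Headlines dataset.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def preprocess(text: str) -> str:
    text = re.sub(r"https?://\S+", " ", text)      # URL removal
    text = re.sub(r"[^a-zA-Z\s]", " ", text)       # strip punctuation and digits
    text = re.sub(r"\s+", " ", text)               # multi-space removal
    return text.lower().strip()

headlines = ["Area man excited to spend weekend refreshing inbox",
             "Local council approves new budget for road repairs"]
labels = [1, 0]                                    # 1 = sarcastic, 0 = not sarcastic

model = make_pipeline(
    TfidfVectorizer(preprocessor=preprocess, ngram_range=(1, 3), stop_words="english"),
    LogisticRegression(max_iter=1000))
model.fit(headlines, labels)
print(model.predict(["City to close library it just opened"]))
```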
Sign language is mainly utilized for communication with people who have hearing disabilities. It is also used to communicate with people having developmental impairments who have limited or no interaction skills. Interaction via sign language thus becomes a fruitful means of communication for hearing- and speech-impaired persons. A hand gesture recognition system is helpful for deaf and mute people; it makes use of a human-computer interface (HCI) and convolutional neural networks (CNN) to identify the static signs of Indian Sign Language (ISL). This study introduces a shark smell optimization with deep learning based automated sign language recognition (SSODL-ASLR) model for hearing- and speech-impaired people. The presented SSODL-ASLR technique majorly concentrates on the recognition and classification of sign language provided by deaf and mute people. The presented SSODL-ASLR model encompasses a two-stage process, namely sign language detection and sign language classification. In the first stage, the Mask Region-based Convolutional Neural Network (Mask R-CNN) model is exploited for sign language recognition. Secondly, the SSO algorithm with a soft margin support vector machine (SM-SVM) model is utilized for sign language classification. To assess the classification performance of the SSODL-ASLR model, a brief set of simulations was carried out. The extensive results portray the supremacy of the SSODL-ASLR model over other techniques.
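Only the second stage is easy to sketch compactly: a soft-margin SVM classifying the feature vectors that a detector such as Mask R-CNN would produce for each hand region. The random features, the ten classes, and the fixed value of C below are placeholders; the detection stage and the SSO tuning are omitted.

```python
# Sketch of the second stage only: a soft-margin SVM classifying feature vectors that a
# detector such as Mask R-CNN would produce for each hand region. The features here are
# random placeholders and C is a fixed value rather than the SSO-tuned one.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(300, 128))             # placeholder per-region feature vectors
labels = rng.integers(0, 10, size=300)             # placeholder ISL sign classes

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)
svm = SVC(C=1.0, kernel="rbf")                     # soft margin controlled by C
svm.fit(X_tr, y_tr)
print("held-out accuracy:", svm.score(X_te, y_te))
```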
Multimodal sentiment analysis is an essential area of research in artificial intelligence that combines multiple modes, such as text and image, to accurately assess sentiment. However, conventional approaches that rely on unimodal pre-trained models for feature extraction from each modality often overlook the intrinsic connections of semantic information between modalities. This limitation is attributed to their training on unimodal data, and it necessitates the use of complex fusion mechanisms for sentiment analysis. In this study, we present a novel approach that combines a vision-language pre-trained model with a proposed multimodal contrastive learning method. Our approach harnesses the power of transfer learning by utilizing a vision-language pre-trained model to extract both visual and textual representations in a unified framework. We employ a Transformer architecture to integrate these representations, thereby enabling the capture of rich semantic information in image-text pairs. To further enhance the representation learning of these pairs, we introduce our proposed multimodal contrastive learning method, which leads to improved performance in sentiment analysis tasks. Our approach is evaluated through extensive experiments on two publicly accessible datasets, where we demonstrate its effectiveness. We achieve a significant improvement in sentiment analysis accuracy, indicating the superiority of our approach over existing techniques. These results highlight the potential of multimodal sentiment analysis and underscore the importance of considering the intrinsic semantic connections between modalities for accurate sentiment assessment.
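One common form of such a multimodal contrastive objective is the symmetric (CLIP-style) loss sketched below, where matching image-text pairs sit on the diagonal of a similarity matrix; the random tensors stand in for the embeddings produced by the vision-language pre-trained model, and this is an illustrative form rather than necessarily the paper's exact objective.

```python
# Sketch of a symmetric (CLIP-style) multimodal contrastive loss over paired image/text
# embeddings. Random tensors stand in for the outputs of the vision-language pre-trained
# model, and this is one common form of the objective, not necessarily the paper's exact one.
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature             # (B, B) similarity matrix
    targets = torch.arange(img.size(0))              # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

image_embeddings = torch.randn(8, 512)               # placeholders for model outputs
text_embeddings = torch.randn(8, 512)
print(multimodal_contrastive_loss(image_embeddings, text_embeddings))
```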
Language teaching is not a one-way process. It interacts with language learning in an extremely intricate way. To improve language teaching, we need to take the process of language learning into account. This paper tries to explore and understand what strategies second language learners consciously or subconsciously adopt during their language learning process through analyses of the linguistic errors they commit, so as to provide some insights into language teaching practice.
Mobile-assisted language learning (MALL) has been regarded as an excellent tool in the field of language acquisition as modern technology develops. The popularity of mobile devices in education makes it possible for people to learn languages through platforms such as tablets and smartphones from anywhere and at any time. However, the application of MALL in a collaborative, student-centered environment has received comparatively little attention. Since spoken fluency and vocabulary size are two crucial components of language proficiency, this study aims to investigate whether young learners can improve their native language level through learning online beyond the classroom. The quantitative data reveal that MALL does make a difference in students' linguistic skills. The results show that incorporating mobile applications into language learning helps learners achieve the learning outcomes and improve their communication skills better than conventional methods alone. In addition, the questionnaire data exposed some issues that need continuous improvement. Viable suggestions are also discussed to share ideas about building a more sustainable learning environment in a data-driven age.
In response to the significant impact of the widespread use of digital devices and mobile technologies on language teaching and learning in this era of Internet information technology, this study investigates the effectiveness of Mobile-Assisted Language Learning (MALL) in enhancing students' English proficiency, while exploring the potential advantages of mobile devices for assisted learning in the English learning environment in China and the potential of mobile applications to foster learner autonomy in English learning. Anchored in the design thinking approach, the researchers used an empirical analysis methodology to develop an efficient mobile-assisted language learning model. Usability testing was conducted through a case study of two mobile applications, WeLearn and Flipped English, at Heilongjiang University of Finance and Economics to measure the usability and acceptability of MALL for English language acquisition among college students, drawing on surveys, interviews, and quantitative assessments. Mobile technology proves a valuable tool for students, enhancing their experience and enjoyment while improving their English language skills; it adds new value and brings new opportunities for both English learners and the language education industry. Indeed, MALL is English learners' new ally.
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environments attract enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they promote ADP formulation significantly. Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential within the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.
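As a reference point for the surveyed discrete-time results, the value-iteration recursion that critic and action networks in ADP approximate can be written, for a system $x_{k+1}=F(x_k,u_k)$ with utility (stage cost) $U(x_k,u_k)$, as

$$V_{i+1}(x_k) = \min_{u_k}\big\{U(x_k,u_k) + V_i\big(F(x_k,u_k)\big)\big\}, \qquad u_i(x_k) = \arg\min_{u_k}\big\{U(x_k,u_k) + V_i\big(F(x_k,u_k)\big)\big\}.$$

This is the textbook form only; the individual works surveyed differ in discounting, constraints, event-triggered sampling, and robustness terms.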
The native language has long been rejected in the foreign language class, which results from a misunderstanding of native language transfer by most teachers. In fact, completing communication tasks with the help of the native language is an effective learning strategy that should be acknowledged. Using the native language to explain specific words or grammar rules makes efficient use of limited class time and increases class efficiency; using the native language to discuss teaching methods and solve problems for students can promote their enthusiasm. In these specific teaching processes, the native language is an important teaching resource. These examples show that the native language has positive effects on foreign language learning and teaching and should have its own place.