As Natural Language Processing(NLP)continues to advance,driven by the emergence of sophisticated large language models such as ChatGPT,there has been a notable growth in research activity.This rapid uptake reflects in...As Natural Language Processing(NLP)continues to advance,driven by the emergence of sophisticated large language models such as ChatGPT,there has been a notable growth in research activity.This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT’s applicability in the NLP domain.This review paper systematically investigates the role of ChatGPT in diverse NLP tasks,including information extraction,Name Entity Recognition(NER),event extraction,relation extraction,Part of Speech(PoS)tagging,text classification,sentiment analysis,emotion recognition and text annotation.The novelty of this work lies in its comprehensive analysis of the existing literature,addressing a critical gap in understanding ChatGPT’s adaptability,limitations,and optimal application.In this paper,we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses(PRISMA)framework to direct our search process and seek relevant studies.Our review reveals ChatGPT’s significant potential in enhancing various NLP tasks.Its adaptability in information extraction tasks,sentiment analysis,and text classification showcases its ability to comprehend diverse contexts and extract meaningful details.Additionally,ChatGPT’s flexibility in annotation tasks reducesmanual efforts and accelerates the annotation process,making it a valuable asset in NLP development and research.Furthermore,GPT-4 and prompt engineering emerge as a complementary mechanism,empowering users to guide the model and enhance overall accuracy.Despite its promising potential,challenges persist.The performance of ChatGP Tneeds tobe testedusingmore extensivedatasets anddiversedata structures.Subsequently,its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigations to address these issues.展开更多
The exponential growth of literature is constraining researchers’access to comprehensive information in related fields.While natural language processing(NLP)may offer an effective solution to literature classificatio...The exponential growth of literature is constraining researchers’access to comprehensive information in related fields.While natural language processing(NLP)may offer an effective solution to literature classification,it remains hindered by the lack of labelled dataset.In this article,we introduce a novel method for generating literature classification models through semi-supervised learning,which can generate labelled dataset iteratively with limited human input.We apply this method to train NLP models for classifying literatures related to several research directions,i.e.,battery,superconductor,topological material,and artificial intelligence(AI)in materials science.The trained NLP‘battery’model applied on a larger dataset different from the training and testing dataset can achieve F1 score of 0.738,which indicates the accuracy and reliability of this scheme.Furthermore,our approach demonstrates that even with insufficient data,the not-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.展开更多
The recent developments in Multimedia Internet of Things(MIoT)devices,empowered with Natural Language Processing(NLP)model,seem to be a promising future of smart devices.It plays an important role in industrial models...The recent developments in Multimedia Internet of Things(MIoT)devices,empowered with Natural Language Processing(NLP)model,seem to be a promising future of smart devices.It plays an important role in industrial models such as speech understanding,emotion detection,home automation,and so on.If an image needs to be captioned,then the objects in that image,its actions and connections,and any silent feature that remains under-projected or missing from the images should be identified.The aim of the image captioning process is to generate a caption for image.In next step,the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct.In this scenario,computer vision model is used to identify the objects and NLP approaches are followed to describe the image.The current study develops aNatural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System(NLPODL-IICS).The aim of the presented NLPODL-IICS model is to produce a proper description for input image.To attain this,the proposed NLPODL-IICS follows two stages such as encoding and decoding processes.Initially,at the encoding side,the proposed NLPODL-IICS model makes use of Hunger Games Search(HGS)with Neural Search Architecture Network(NASNet)model.This model represents the input data appropriately by inserting it into a predefined length vector.Besides,during decoding phase,Chimp Optimization Algorithm(COA)with deeper Long Short Term Memory(LSTM)approach is followed to concatenate the description sentences 4436 CMC,2023,vol.74,no.2 produced by the method.The application of HGS and COA algorithms helps in accomplishing proper parameter tuning for NASNet and LSTM models respectively.The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets.Awidespread comparative analysis confirmed the superior performance of NLPODL-IICS model over other models.展开更多
This paper attempts to approach the interface of a robot from the perspective of virtual assistants.Virtual assistants can also be characterized as the mind of a robot,since they manage communication and action with t...This paper attempts to approach the interface of a robot from the perspective of virtual assistants.Virtual assistants can also be characterized as the mind of a robot,since they manage communication and action with the rest of the world they exist in.Therefore,virtual assistants can also be described as the brain of a robot and they include a Natural Language Processing(NLP)module for conducting communication in their human-robot interface.This work is focused on inquiring and enhancing the capabilities of this module.The problem is that nothing much is revealed about the nature of the human-robot interface of commercial virtual assistants.Therefore,any new attempt of developing such a capability has to start from scratch.Accordingly,to include corresponding capabilities to a developing NLP system of a virtual assistant,a method of systemic semantic modelling is proposed and applied.For this purpose,the paper briefly reviews the evolution of virtual assistants from the first assistant,in the form of a game,to the latest assistant that has significantly elevated their standards.Then there is a reference to the evolution of their services and their continued offerings,as well as future expectations.The paper presents their structure and the technologies used,according to the data provided by the development companies to the public,while an attempt is made to classify virtual assistants,based on their characteristics and capabilities.Consequently,a robotic NLP interface is being developed,based on the communicative power of a proposed systemic conceptual model that may enhance the NLP capabilities of virtual assistants,being tested through a small natural language dictionary in Greek.展开更多
Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentime...Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.展开更多
Software project outcomes heavily depend on natural language requirements,often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements.Researchers are exploring machine learn...Software project outcomes heavily depend on natural language requirements,often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements.Researchers are exploring machine learning to predict software bugs,but a more precise and general approach is needed.Accurate bug prediction is crucial for software evolution and user training,prompting an investigation into deep and ensemble learning methods.However,these studies are not generalized and efficient when extended to other datasets.Therefore,this paper proposed a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems.The methods involved feature selection,which is used to reduce the dimensionality and redundancy of features and select only the relevant ones;transfer learning is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets,and ensemble method is utilized to explore the increase in performance upon combining multiple classifiers in a model.Four National Aeronautics and Space Administration(NASA)and four Promise datasets are used in the study,showing an increase in the model’s performance by providing better Area Under the Receiver Operating Characteristic Curve(AUC-ROC)values when different classifiers were combined.It reveals that using an amalgam of techniques such as those used in this study,feature selection,transfer learning,and ensemble methods prove helpful in optimizing the software bug prediction models and providing high-performing,useful end mode.展开更多
A variety of neural networks have been presented to deal with issues in deep learning in the last decades.Despite the prominent success achieved by the neural network,it still lacks theoretical guidance to design an e...A variety of neural networks have been presented to deal with issues in deep learning in the last decades.Despite the prominent success achieved by the neural network,it still lacks theoretical guidance to design an efficient neural network model,and verifying the performance of a model needs excessive resources.Previous research studies have demonstrated that many existing models can be regarded as different numerical discretizations of differential equations.This connection sheds light on designing an effective recurrent neural network(RNN)by resorting to numerical analysis.Simple RNN is regarded as a discretisation of the forward Euler scheme.Considering the limited solution accuracy of the forward Euler methods,a Taylor‐type discrete scheme is presented with lower truncation error and a Taylor‐type RNN(T‐RNN)is designed with its guidance.Extensive experiments are conducted to evaluate its performance on statistical language models and emotion analysis tasks.The noticeable gains obtained by T‐RNN present its superiority and the feasibility of designing the neural network model using numerical methods.展开更多
Sentiment analysis(SA)is the procedure of recognizing the emotions related to the data that exist in social networking.The existence of sarcasm in tex-tual data is a major challenge in the efficiency of the SA.Earlier...Sentiment analysis(SA)is the procedure of recognizing the emotions related to the data that exist in social networking.The existence of sarcasm in tex-tual data is a major challenge in the efficiency of the SA.Earlier works on sarcasm detection on text utilize lexical as well as pragmatic cues namely interjection,punctuations,and sentiment shift that are vital indicators of sarcasm.With the advent of deep-learning,recent works,leveraging neural networks in learning lexical and contextual features,removing the need for handcrafted feature.In this aspect,this study designs a deep learning with natural language processing enabled SA(DLNLP-SA)technique for sarcasm classification.The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data.Besides,the DLNLP-SA technique holds various sub-processes namely preprocessing,feature vector conversion,and classification.Initially,the pre-processing is performed in diverse ways such as single character removal,multi-spaces removal,URL removal,stopword removal,and tokenization.Secondly,the transformation of feature vectors takes place using the N-gram feature vector technique.Finally,mayfly optimization(MFO)with multi-head self-attention based gated recurrent unit(MHSA-GRU)model is employed for the detection and classification of sarcasm.To verify the enhanced outcomes of the DLNLP-SA model,a comprehensive experimental investigation is performed on the News Headlines Dataset from Kaggle Repository and the results signified the supremacy over the existing approaches.展开更多
One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solves the so-called curse ...One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solves the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, upwards of hundreds of thousands of terms typically. As such, much of the research and development in NLP in the last two decades has been in finding and optimizing solutions to this problem, to feature selection in NLP effectively. This paper looks at the development of these various techniques, leveraging a variety of statistical methods which rest on linguistic theories that were advanced in the middle of the last century, namely the distributional hypothesis which suggests that words that are found in similar contexts generally have similar meanings. In this survey paper we look at the development of some of the most popular of these techniques from a mathematical as well as data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants which are typically referred to as word embeddings. In this review of algoriths such as Word2Vec, GloVe, ELMo and BERT, we explore the idea of semantic spaces more generally beyond applicability to NLP.展开更多
This work is about the progress of previous related work based on an experiment to improve the intelligence of robotic systems,with the aim of achieving more linguistic communication capabilities between humans and ro...This work is about the progress of previous related work based on an experiment to improve the intelligence of robotic systems,with the aim of achieving more linguistic communication capabilities between humans and robots.In this paper,the authors attempt an algorithmic approach to natural language generation through hole semantics and by applying the OMAS-III computational model as a grammatical formalism.In the original work,a technical language is used,while in the later works,this has been replaced by a limited Greek natural language dictionary.This particular effort was made to give the evolving system the ability to ask questions,as well as the authors developed an initial dialogue system using these techniques.The results show that the use of these techniques the authors apply can give us a more sophisticated dialogue system in the future.展开更多
In this research paper, we research on the automatic pattern abstraction and recognition method for large-scale database system based on natural language processing. In distributed database, through the network connec...In this research paper, we research on the automatic pattern abstraction and recognition method for large-scale database system based on natural language processing. In distributed database, through the network connection between nodes, data across different nodes and even regional distribution are well recognized. In order to reduce data redundancy and model design of the database will usually contain a lot of forms we combine the NLP theory to optimize the traditional method. The experimental analysis and simulation proves the correctness of our method.展开更多
Purpose:This work aims to normalize the NLPCONTRIBUTIONS scheme(henceforward,NLPCONTRIBUTIONGRAPH)to structure,directly from article sentences,the contributions information in Natural Language Processing(NLP)scholarly...Purpose:This work aims to normalize the NLPCONTRIBUTIONS scheme(henceforward,NLPCONTRIBUTIONGRAPH)to structure,directly from article sentences,the contributions information in Natural Language Processing(NLP)scholarly articles via a two-stage annotation methodology:1)pilot stage-to define the scheme(described in prior work);and 2)adjudication stage-to normalize the graphing model(the focus of this paper).Design/methodology/approach:We re-annotate,a second time,the contributions-pertinent information across 50 prior-annotated NLP scholarly articles in terms of a data pipeline comprising:contribution-centered sentences,phrases,and triple statements.To this end,specifically,care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme.Findings:The application of NLPCONTRIBUTIONGRAPH on the 50 articles resulted finally in a dataset of 900 contribution-focused sentences,4,702 contribution-information-centered phrases,and 2,980 surface-structured triples.The intra-annotation agreement between the first and second stages,in terms of F1-score,was 67.92%for sentences,41.82%for phrases,and 22.31%for triple statements indicating that with increased granularity of the information,the annotation decision variance is greater.Research limitations:NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM(Science,Technology,Engineering,and Medicine)scholarly knowledge at large.Further,the annotation scheme in this work is designed by only an intra-annotator consensus-a single annotator first annotated the data to propose the initial scheme,following which,the same annotator reannotated the data to normalize the annotations in an adjudication stage.However,the expected goal of this work is to achieve a standardized retrospective model of capturing NLP contributions from scholarly articles.This would entail a larger initiative of enlisting multiple annotators to accommodate different worldviews into a“single”set of structures and relationships as the final scheme.Given that the initial scheme is first proposed and the complexity of the annotation task in the realistic timeframe,our intraannotation procedure is well-suited.Nevertheless,the model proposed in this work is presently limited since it does not incorporate multiple annotator worldviews.This is planned as future work to produce a robust model.Practical implications:We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph(ORKG),a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge,as a viable aid to assist researchers in their day-to-day tasks.Originality/value:NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them in a knowledge graph,which to the best of our knowledge does not exist in the community.Furthermore,our quantitative evaluations over the two-stage annotation tasks offer insights into task difficulty.展开更多
Objective Natural language processing (NLP) was used to excavate and visualize the core content of syndrome element syndrome differentiation (SESD). Methods The first step was to build a text mining and analysis envir...Objective Natural language processing (NLP) was used to excavate and visualize the core content of syndrome element syndrome differentiation (SESD). Methods The first step was to build a text mining and analysis environment based on Python language, and built a corpus based on the core chapters of SESD. The second step was to digitalize the corpus. The main steps included word segmentation, information cleaning and merging, document-entry matrix, dictionary compilation and information conversion. The third step was to mine and display the internal information of SESD corpus by means of word cloud, keyword extraction and visualization. Results NLP played a positive role in computer recognition and comprehension of SESD. Different chapters had different keywords and weights. Deficiency syndrome elements were an important component of SESD, such as "Qi deficiency""Yang deficiency" and "Yin deficiency". The important syndrome elements of substantiality included "Blood stasis""Qi stagnation", etc. Core syndrome elements were closely related. Conclusions Syndrome differentiation and treatment was the core of SESD. Using NLP to excavate syndromes differentiation could help reveal the internal relationship between syndromes differentiation and provide basis for artificial intelligence to learn syndromes differentiation.展开更多
COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus.The foremost and most prime sector among those affected were schools,colleges,and universities.The education system of en...COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus.The foremost and most prime sector among those affected were schools,colleges,and universities.The education system of entire nations had shifted to online education during this time.Many shortcomings of Learning Management Systems(LMSs)were detected to support education in an online mode that spawned the research in Artificial Intelligence(AI)based tools that are being developed by the research community to improve the effectiveness of LMSs.This paper presents a detailed survey of the different enhancements to LMSs,which are led by key advances in the area of AI to enhance the real-time and non-real-time user experience.The AI-based enhancements proposed to the LMSs start from the Application layer and Presentation layer in the form of flipped classroom models for the efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources,including AI-based chatbots.Session layer enhancements are also required,such as AI-based online proctoring and user authentication using Biometrics.These extend to the Transport layer to support real-time and rate adaptive encrypted video transmission for user security/privacy and satisfactory working of AI-algorithms.It also needs the support of the Networking layer for IP-based geolocation features,the Virtual Private Network(VPN)feature,and the support of Software-Defined Networks(SDN)for optimum Quality of Service(QoS).Finally,in addition to these,non-real-time user experience is enhanced by other AI-based enhancements such as Plagiarism detection algorithms and Data Analytics.展开更多
Large Language Models(LLMs)are increasingly demonstrating their ability to understand natural language and solve complex tasks,especially through text generation.One of the relevant capabilities is contextual learning...Large Language Models(LLMs)are increasingly demonstrating their ability to understand natural language and solve complex tasks,especially through text generation.One of the relevant capabilities is contextual learning,which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates.In recent years,the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior.In this study,we investigate the ability of different LLMs,ranging from zero-shot and few-shot learning to fine-tuning.Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval.Furthermore,it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach,scoring 86.811%on the Explainable Detection of Online Sexism(EDOS)test-set and 57.453%on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter(HatEval)test-set.Finally,it is confirmed that the evaluated models perform well in hate text detection,as they beat the best result in the HatEval task leaderboard.The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language.However,the fine-tuned approach tends to produce many false positives.展开更多
Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requir...Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requires more syntactic elements.Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence,recognizing the progress and comparing impacts.An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus.The conversational sentences are classified into four categories:information,question,directive,and commission.These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation.Ensemble of Bidirectional Encoder for Representation of Transformer(BERT),Robustly Optimized BERT pretraining Approach(RoBERTa),Generative Pre-Trained Transformer(GPT),DistilBERT and Generalized Autoregressive Pretraining for Language Understanding(XLNet)models are trained on conversation corpus with hyperparameters.Hyperparameter tuning approach is carried out for better performance on sentence classification.This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning(EPLM-HT)system is trained on an annotated conversation dataset.The proposed approach outperformed compared to the base BERT,GPT,DistilBERT and XLNet transformer models.The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.展开更多
The recent advancements made in World Wide Web and social networking have eased the spread of fake news among people at a faster rate.At most of the times,the intention of fake news is to misinform the people and make...The recent advancements made in World Wide Web and social networking have eased the spread of fake news among people at a faster rate.At most of the times,the intention of fake news is to misinform the people and make manipulated societal insights.The spread of low-quality news in social networking sites has a negative influence upon people as well as the society.In order to overcome the ever-increasing dissemination of fake news,automated detection models are developed using Artificial Intelligence(AI)and Machine Learning(ML)methods.The latest advancements in Deep Learning(DL)models and complex Natural Language Processing(NLP)tasks make the former,a significant solution to achieve Fake News Detection(FND).In this background,the current study focuses on design and development of Natural Language Processing with Sea Turtle Foraging Optimizationbased Deep Learning Technique for Fake News Detection and Classification(STODL-FNDC)model.The aim of the proposed STODL-FNDC model is to discriminate fake news from legitimate news in an effectual manner.In the proposed STODL-FNDC model,the input data primarily undergoes pre-processing and Glove-based word embedding.Besides,STODL-FNDC model employs Deep Belief Network(DBN)approach for detection as well as classification of fake news.Finally,STO algorithm is utilized after adjusting the hyperparameters involved in DBN model,in an optimal manner.The novelty of the study lies in the design of STO algorithm with DBN model for FND.In order to improve the detection performance of STODL-FNDC technique,a series of simulations was carried out on benchmark datasets.The experimental outcomes established the better performance of STODL-FNDC approach over other methods with a maximum accuracy of 95.50%.展开更多
Student mobility or academic mobility involves students moving between institutions during their post-secondary education,and one of the challenging tasks in this process is to assess the transfer credits to be offere...Student mobility or academic mobility involves students moving between institutions during their post-secondary education,and one of the challenging tasks in this process is to assess the transfer credits to be offered to the incoming student.In general,this process involves domain experts comparing the learning outcomes of the courses,to decide on offering transfer credits to the incoming students.This manual implementation is not only labor-intensive but also influenced by undue bias and administrative complexity.The proposed research article focuses on identifying a model that exploits the advancements in the field of Natural Language Processing(NLP)to effectively automate this process.Given the unique structure,domain specificity,and complexity of learning outcomes(LOs),a need for designing a tailor-made model arises.The proposed model uses a clustering-inspired methodology based on knowledge-based semantic similarity measures to assess the taxonomic similarity of LOs and a transformer-based semantic similarity model to assess the semantic similarity of the LOs.The similarity between LOs is further aggregated to form course to course similarity.Due to the lack of quality benchmark datasets,a new benchmark dataset containing seven course-to-course similarity measures is proposed.Understanding the inherent need for flexibility in the decision-making process the aggregation part of the model offers tunable parameters to accommodate different levels of leniency.While providing an efficient model to assess the similarity between courses with existing resources,this research work also steers future research attempts to apply NLP in the field of articulation in an ideal direction by highlighting the persisting research gaps.展开更多
Most software systems have different stakeholders with a variety of concerns.The process of collecting requirements from a large number of stakeholders is vital but challenging.We propose an efficient,automatic approa...Most software systems have different stakeholders with a variety of concerns.The process of collecting requirements from a large number of stakeholders is vital but challenging.We propose an efficient,automatic approach to collecting requirements from different stakeholders’responses to a specific question.We use natural language processing techniques to get the stakeholder response that represents most other stakeholders’responses.This study improves existing practices in three ways:Firstly,it reduces the human effort needed to collect the requirements;secondly,it reduces the time required to carry out this task with a large number of stakeholders;thirdly,it underlines the importance of using of data mining techniques in various software engineering steps.Our approach uses tokenization,stop word removal,and word lemmatization to create a list of frequently accruing words.It then creates a similarity matrix to calculate the score value for each response and selects the answer with the highest score.Our experiments show that using this approach significantly reduces the time and effort needed to collect requirements and does so with a sufficient degree of accuracy.展开更多
With the acceleration of network communication in the 5G era,the volume of data communication in cyberspace has increased unprecedentedly.The speed of data transmission will accelerate.Subsequently,the security of net...With the acceleration of network communication in the 5G era,the volume of data communication in cyberspace has increased unprecedentedly.The speed of data transmission will accelerate.Subsequently,the security of network communication data becomes more and more serious.Among them,malicious cross⁃site scripting leading to the leakage of user information is very serious.This article uses URL attribute analysis method and YARA rule to process data for cross⁃site scripting based on the long short⁃term memory(LSTM)characteristics of LSTM model.The results show that the LSTM classification model adopted in this paper has higher recall rate and F1⁃score than other machine learning methods,which proves that the method adopted in this paper is feasible.展开更多
文摘As Natural Language Processing(NLP)continues to advance,driven by the emergence of sophisticated large language models such as ChatGPT,there has been a notable growth in research activity.This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT’s applicability in the NLP domain.This review paper systematically investigates the role of ChatGPT in diverse NLP tasks,including information extraction,Name Entity Recognition(NER),event extraction,relation extraction,Part of Speech(PoS)tagging,text classification,sentiment analysis,emotion recognition and text annotation.The novelty of this work lies in its comprehensive analysis of the existing literature,addressing a critical gap in understanding ChatGPT’s adaptability,limitations,and optimal application.In this paper,we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses(PRISMA)framework to direct our search process and seek relevant studies.Our review reveals ChatGPT’s significant potential in enhancing various NLP tasks.Its adaptability in information extraction tasks,sentiment analysis,and text classification showcases its ability to comprehend diverse contexts and extract meaningful details.Additionally,ChatGPT’s flexibility in annotation tasks reducesmanual efforts and accelerates the annotation process,making it a valuable asset in NLP development and research.Furthermore,GPT-4 and prompt engineering emerge as a complementary mechanism,empowering users to guide the model and enhance overall accuracy.Despite its promising potential,challenges persist.The performance of ChatGP Tneeds tobe testedusingmore extensivedatasets anddiversedata structures.Subsequently,its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigations to address these issues.
基金funded by the Informatization Plan of Chinese Academy of Sciences(Grant No.CASWX2021SF-0102)the National Key R&D Program of China(Grant Nos.2022YFA1603903,2022YFA1403800,and 2021YFA0718700)+1 种基金the National Natural Science Foundation of China(Grant Nos.11925408,11921004,and 12188101)the Chinese Academy of Sciences(Grant No.XDB33000000)。
文摘The exponential growth of literature is constraining researchers’access to comprehensive information in related fields.While natural language processing(NLP)may offer an effective solution to literature classification,it remains hindered by the lack of labelled dataset.In this article,we introduce a novel method for generating literature classification models through semi-supervised learning,which can generate labelled dataset iteratively with limited human input.We apply this method to train NLP models for classifying literatures related to several research directions,i.e.,battery,superconductor,topological material,and artificial intelligence(AI)in materials science.The trained NLP‘battery’model applied on a larger dataset different from the training and testing dataset can achieve F1 score of 0.738,which indicates the accuracy and reliability of this scheme.Furthermore,our approach demonstrates that even with insufficient data,the not-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R161)PrincessNourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the|Deanship of Scientific Research at Umm Al-Qura University|for supporting this work by Grant Code:(22UQU4310373DSR33).
文摘The recent developments in Multimedia Internet of Things(MIoT)devices,empowered with Natural Language Processing(NLP)model,seem to be a promising future of smart devices.It plays an important role in industrial models such as speech understanding,emotion detection,home automation,and so on.If an image needs to be captioned,then the objects in that image,its actions and connections,and any silent feature that remains under-projected or missing from the images should be identified.The aim of the image captioning process is to generate a caption for image.In next step,the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct.In this scenario,computer vision model is used to identify the objects and NLP approaches are followed to describe the image.The current study develops aNatural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System(NLPODL-IICS).The aim of the presented NLPODL-IICS model is to produce a proper description for input image.To attain this,the proposed NLPODL-IICS follows two stages such as encoding and decoding processes.Initially,at the encoding side,the proposed NLPODL-IICS model makes use of Hunger Games Search(HGS)with Neural Search Architecture Network(NASNet)model.This model represents the input data appropriately by inserting it into a predefined length vector.Besides,during decoding phase,Chimp Optimization Algorithm(COA)with deeper Long Short Term Memory(LSTM)approach is followed to concatenate the description sentences 4436 CMC,2023,vol.74,no.2 produced by the method.The application of HGS and COA algorithms helps in accomplishing proper parameter tuning for NASNet and LSTM models respectively.The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets.Awidespread comparative analysis confirmed the superior performance of NLPODL-IICS model over other models.
文摘This paper attempts to approach the interface of a robot from the perspective of virtual assistants.Virtual assistants can also be characterized as the mind of a robot,since they manage communication and action with the rest of the world they exist in.Therefore,virtual assistants can also be described as the brain of a robot and they include a Natural Language Processing(NLP)module for conducting communication in their human-robot interface.This work is focused on inquiring and enhancing the capabilities of this module.The problem is that nothing much is revealed about the nature of the human-robot interface of commercial virtual assistants.Therefore,any new attempt of developing such a capability has to start from scratch.Accordingly,to include corresponding capabilities to a developing NLP system of a virtual assistant,a method of systemic semantic modelling is proposed and applied.For this purpose,the paper briefly reviews the evolution of virtual assistants from the first assistant,in the form of a game,to the latest assistant that has significantly elevated their standards.Then there is a reference to the evolution of their services and their continued offerings,as well as future expectations.The paper presents their structure and the technologies used,according to the data provided by the development companies to the public,while an attempt is made to classify virtual assistants,based on their characteristics and capabilities.Consequently,a robotic NLP interface is being developed,based on the communicative power of a proposed systemic conceptual model that may enhance the NLP capabilities of virtual assistants,being tested through a small natural language dictionary in Greek.
文摘Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
基金This Research is funded by Researchers Supporting Project Number(RSPD2024R947),King Saud University,Riyadh,Saudi Arabia.
文摘Software project outcomes heavily depend on natural language requirements,often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements.Researchers are exploring machine learning to predict software bugs,but a more precise and general approach is needed.Accurate bug prediction is crucial for software evolution and user training,prompting an investigation into deep and ensemble learning methods.However,these studies are not generalized and efficient when extended to other datasets.Therefore,this paper proposed a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems.The methods involved feature selection,which is used to reduce the dimensionality and redundancy of features and select only the relevant ones;transfer learning is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets,and ensemble method is utilized to explore the increase in performance upon combining multiple classifiers in a model.Four National Aeronautics and Space Administration(NASA)and four Promise datasets are used in the study,showing an increase in the model’s performance by providing better Area Under the Receiver Operating Characteristic Curve(AUC-ROC)values when different classifiers were combined.It reveals that using an amalgam of techniques such as those used in this study,feature selection,transfer learning,and ensemble methods prove helpful in optimizing the software bug prediction models and providing high-performing,useful end mode.
基金supported in part by the National Natural Science Foundation of China under Grant 62176109in part by the Tibetan Information Processing and Machine Translation Key Laboratory of Qinghai Province under Grant 2021‐Z‐003+3 种基金in part by the Natural Science Foundation of Gansu Province under Grant 21JR7RA531 and Grant 22JR5RA487in part by the Fundamental Research Funds for the Central Universities under Grant lzujbky‐2022‐23in part by the CAAI‐Huawei MindSpore Open Fund under Grant CAAIXSJLJJ‐2022‐020Ain part by the Supercomputing Center of Lanzhou University,in part by Sichuan Science and Technology Program No.2022nsfsc0916.
文摘A variety of neural networks have been presented to deal with issues in deep learning in the last decades.Despite the prominent success achieved by the neural network,it still lacks theoretical guidance to design an efficient neural network model,and verifying the performance of a model needs excessive resources.Previous research studies have demonstrated that many existing models can be regarded as different numerical discretizations of differential equations.This connection sheds light on designing an effective recurrent neural network(RNN)by resorting to numerical analysis.Simple RNN is regarded as a discretisation of the forward Euler scheme.Considering the limited solution accuracy of the forward Euler methods,a Taylor‐type discrete scheme is presented with lower truncation error and a Taylor‐type RNN(T‐RNN)is designed with its guidance.Extensive experiments are conducted to evaluate its performance on statistical language models and emotion analysis tasks.The noticeable gains obtained by T‐RNN present its superiority and the feasibility of designing the neural network model using numerical methods.
基金supported through the Annual Funding track by the Deanship of Scientific Research,Vice Presidency for Graduate Studies and Scientific Research,King Faisal University,Saudi Arabia[Project No.AN000685].
文摘Sentiment analysis(SA)is the procedure of recognizing the emotions related to the data that exist in social networking.The existence of sarcasm in tex-tual data is a major challenge in the efficiency of the SA.Earlier works on sarcasm detection on text utilize lexical as well as pragmatic cues namely interjection,punctuations,and sentiment shift that are vital indicators of sarcasm.With the advent of deep-learning,recent works,leveraging neural networks in learning lexical and contextual features,removing the need for handcrafted feature.In this aspect,this study designs a deep learning with natural language processing enabled SA(DLNLP-SA)technique for sarcasm classification.The proposed DLNLP-SA technique aims to detect and classify the occurrence of sarcasm in the input data.Besides,the DLNLP-SA technique holds various sub-processes namely preprocessing,feature vector conversion,and classification.Initially,the pre-processing is performed in diverse ways such as single character removal,multi-spaces removal,URL removal,stopword removal,and tokenization.Secondly,the transformation of feature vectors takes place using the N-gram feature vector technique.Finally,mayfly optimization(MFO)with multi-head self-attention based gated recurrent unit(MHSA-GRU)model is employed for the detection and classification of sarcasm.To verify the enhanced outcomes of the DLNLP-SA model,a comprehensive experimental investigation is performed on the News Headlines Dataset from Kaggle Repository and the results signified the supremacy over the existing approaches.
文摘One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solves the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, upwards of hundreds of thousands of terms typically. As such, much of the research and development in NLP in the last two decades has been in finding and optimizing solutions to this problem, to feature selection in NLP effectively. This paper looks at the development of these various techniques, leveraging a variety of statistical methods which rest on linguistic theories that were advanced in the middle of the last century, namely the distributional hypothesis which suggests that words that are found in similar contexts generally have similar meanings. In this survey paper we look at the development of some of the most popular of these techniques from a mathematical as well as data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants which are typically referred to as word embeddings. In this review of algoriths such as Word2Vec, GloVe, ELMo and BERT, we explore the idea of semantic spaces more generally beyond applicability to NLP.
文摘This work is about the progress of previous related work based on an experiment to improve the intelligence of robotic systems,with the aim of achieving more linguistic communication capabilities between humans and robots.In this paper,the authors attempt an algorithmic approach to natural language generation through hole semantics and by applying the OMAS-III computational model as a grammatical formalism.In the original work,a technical language is used,while in the later works,this has been replaced by a limited Greek natural language dictionary.This particular effort was made to give the evolving system the ability to ask questions,as well as the authors developed an initial dialogue system using these techniques.The results show that the use of these techniques the authors apply can give us a more sophisticated dialogue system in the future.
文摘In this research paper, we research on the automatic pattern abstraction and recognition method for large-scale database system based on natural language processing. In distributed database, through the network connection between nodes, data across different nodes and even regional distribution are well recognized. In order to reduce data redundancy and model design of the database will usually contain a lot of forms we combine the NLP theory to optimize the traditional method. The experimental analysis and simulation proves the correctness of our method.
基金This work was co-funded by the European Research Council for the project ScienceGRAPH(Grant agreement ID:819536)by the TIB Leibniz Information Centre for Science and Technology.
文摘Purpose:This work aims to normalize the NLPCONTRIBUTIONS scheme(henceforward,NLPCONTRIBUTIONGRAPH)to structure,directly from article sentences,the contributions information in Natural Language Processing(NLP)scholarly articles via a two-stage annotation methodology:1)pilot stage-to define the scheme(described in prior work);and 2)adjudication stage-to normalize the graphing model(the focus of this paper).Design/methodology/approach:We re-annotate,a second time,the contributions-pertinent information across 50 prior-annotated NLP scholarly articles in terms of a data pipeline comprising:contribution-centered sentences,phrases,and triple statements.To this end,specifically,care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme.Findings:The application of NLPCONTRIBUTIONGRAPH on the 50 articles resulted finally in a dataset of 900 contribution-focused sentences,4,702 contribution-information-centered phrases,and 2,980 surface-structured triples.The intra-annotation agreement between the first and second stages,in terms of F1-score,was 67.92%for sentences,41.82%for phrases,and 22.31%for triple statements indicating that with increased granularity of the information,the annotation decision variance is greater.Research limitations:NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM(Science,Technology,Engineering,and Medicine)scholarly knowledge at large.Further,the annotation scheme in this work is designed by only an intra-annotator consensus-a single annotator first annotated the data to propose the initial scheme,following which,the same annotator reannotated the data to normalize the annotations in an adjudication stage.However,the expected goal of this work is to achieve a standardized retrospective model of capturing NLP contributions from scholarly articles.This would entail a larger initiative of enlisting multiple annotators to accommodate different worldviews into a“single”set of structures and relationships as the final scheme.Given that the initial scheme is first proposed and the complexity of the annotation task in the realistic timeframe,our intraannotation procedure is well-suited.Nevertheless,the model proposed in this work is presently limited since it does not incorporate multiple annotator worldviews.This is planned as future work to produce a robust model.Practical implications:We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph(ORKG),a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge,as a viable aid to assist researchers in their day-to-day tasks.Originality/value:NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them in a knowledge graph,which to the best of our knowledge does not exist in the community.Furthermore,our quantitative evaluations over the two-stage annotation tasks offer insights into task difficulty.
基金the funding support from the National Natural Science Foundation of China (No. 81874429)Digital and Applied Research Platform for Diagnosis of Traditional Chinese Medicine (No. 49021003005)+1 种基金2018 Hunan Provincial Postgraduate Research Innovation Project (No. CX2018B465)Excellent Youth Project of Hunan Education Department in 2018 (No. 18B241)
文摘Objective Natural language processing (NLP) was used to excavate and visualize the core content of syndrome element syndrome differentiation (SESD). Methods The first step was to build a text mining and analysis environment based on Python language, and built a corpus based on the core chapters of SESD. The second step was to digitalize the corpus. The main steps included word segmentation, information cleaning and merging, document-entry matrix, dictionary compilation and information conversion. The third step was to mine and display the internal information of SESD corpus by means of word cloud, keyword extraction and visualization. Results NLP played a positive role in computer recognition and comprehension of SESD. Different chapters had different keywords and weights. Deficiency syndrome elements were an important component of SESD, such as "Qi deficiency""Yang deficiency" and "Yin deficiency". The important syndrome elements of substantiality included "Blood stasis""Qi stagnation", etc. Core syndrome elements were closely related. Conclusions Syndrome differentiation and treatment was the core of SESD. Using NLP to excavate syndromes differentiation could help reveal the internal relationship between syndromes differentiation and provide basis for artificial intelligence to learn syndromes differentiation.
文摘COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus.The foremost and most prime sector among those affected were schools,colleges,and universities.The education system of entire nations had shifted to online education during this time.Many shortcomings of Learning Management Systems(LMSs)were detected to support education in an online mode that spawned the research in Artificial Intelligence(AI)based tools that are being developed by the research community to improve the effectiveness of LMSs.This paper presents a detailed survey of the different enhancements to LMSs,which are led by key advances in the area of AI to enhance the real-time and non-real-time user experience.The AI-based enhancements proposed to the LMSs start from the Application layer and Presentation layer in the form of flipped classroom models for the efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources,including AI-based chatbots.Session layer enhancements are also required,such as AI-based online proctoring and user authentication using Biometrics.These extend to the Transport layer to support real-time and rate adaptive encrypted video transmission for user security/privacy and satisfactory working of AI-algorithms.It also needs the support of the Networking layer for IP-based geolocation features,the Virtual Private Network(VPN)feature,and the support of Software-Defined Networks(SDN)for optimum Quality of Service(QoS).Finally,in addition to these,non-real-time user experience is enhanced by other AI-based enhancements such as Plagiarism detection algorithms and Data Analytics.
基金This work is part of the research projects LaTe4PoliticES(PID2022-138099OBI00)funded by MICIU/AEI/10.13039/501100011033the European Regional Development Fund(ERDF)-A Way of Making Europe and LT-SWM(TED2021-131167B-I00)funded by MICIU/AEI/10.13039/501100011033the European Union NextGenerationEU/PRTR.Mr.Ronghao Pan is supported by the Programa Investigo grant,funded by the Region of Murcia,the Spanish Ministry of Labour and Social Economy and the European Union-NextGenerationEU under the“Plan de Recuperación,Transformación y Resiliencia(PRTR).”。
文摘Large Language Models(LLMs)are increasingly demonstrating their ability to understand natural language and solve complex tasks,especially through text generation.One of the relevant capabilities is contextual learning,which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates.In recent years,the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior.In this study,we investigate the ability of different LLMs,ranging from zero-shot and few-shot learning to fine-tuning.Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval.Furthermore,it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach,scoring 86.811%on the Explainable Detection of Online Sexism(EDOS)test-set and 57.453%on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter(HatEval)test-set.Finally,it is confirmed that the evaluated models perform well in hate text detection,as they beat the best result in the HatEval task leaderboard.The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language.However,the fine-tuned approach tends to produce many false positives.
文摘Sentence classification is the process of categorizing a sentence based on the context of the sentence.Sentence categorization requires more semantic highlights than other tasks,such as dependence parsing,which requires more syntactic elements.Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence,recognizing the progress and comparing impacts.An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus.The conversational sentences are classified into four categories:information,question,directive,and commission.These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation.Ensemble of Bidirectional Encoder for Representation of Transformer(BERT),Robustly Optimized BERT pretraining Approach(RoBERTa),Generative Pre-Trained Transformer(GPT),DistilBERT and Generalized Autoregressive Pretraining for Language Understanding(XLNet)models are trained on conversation corpus with hyperparameters.Hyperparameter tuning approach is carried out for better performance on sentence classification.This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning(EPLM-HT)system is trained on an annotated conversation dataset.The proposed approach outperformed compared to the base BERT,GPT,DistilBERT and XLNet transformer models.The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
文摘The recent advancements made in World Wide Web and social networking have eased the spread of fake news among people at a faster rate.At most of the times,the intention of fake news is to misinform the people and make manipulated societal insights.The spread of low-quality news in social networking sites has a negative influence upon people as well as the society.In order to overcome the ever-increasing dissemination of fake news,automated detection models are developed using Artificial Intelligence(AI)and Machine Learning(ML)methods.The latest advancements in Deep Learning(DL)models and complex Natural Language Processing(NLP)tasks make the former,a significant solution to achieve Fake News Detection(FND).In this background,the current study focuses on design and development of Natural Language Processing with Sea Turtle Foraging Optimizationbased Deep Learning Technique for Fake News Detection and Classification(STODL-FNDC)model.The aim of the proposed STODL-FNDC model is to discriminate fake news from legitimate news in an effectual manner.In the proposed STODL-FNDC model,the input data primarily undergoes pre-processing and Glove-based word embedding.Besides,STODL-FNDC model employs Deep Belief Network(DBN)approach for detection as well as classification of fake news.Finally,STO algorithm is utilized after adjusting the hyperparameters involved in DBN model,in an optimal manner.The novelty of the study lies in the design of STO algorithm with DBN model for FND.In order to improve the detection performance of STODL-FNDC technique,a series of simulations was carried out on benchmark datasets.The experimental outcomes established the better performance of STODL-FNDC approach over other methods with a maximum accuracy of 95.50%.
文摘Student mobility or academic mobility involves students moving between institutions during their post-secondary education,and one of the challenging tasks in this process is to assess the transfer credits to be offered to the incoming student.In general,this process involves domain experts comparing the learning outcomes of the courses,to decide on offering transfer credits to the incoming students.This manual implementation is not only labor-intensive but also influenced by undue bias and administrative complexity.The proposed research article focuses on identifying a model that exploits the advancements in the field of Natural Language Processing(NLP)to effectively automate this process.Given the unique structure,domain specificity,and complexity of learning outcomes(LOs),a need for designing a tailor-made model arises.The proposed model uses a clustering-inspired methodology based on knowledge-based semantic similarity measures to assess the taxonomic similarity of LOs and a transformer-based semantic similarity model to assess the semantic similarity of the LOs.The similarity between LOs is further aggregated to form course to course similarity.Due to the lack of quality benchmark datasets,a new benchmark dataset containing seven course-to-course similarity measures is proposed.Understanding the inherent need for flexibility in the decision-making process the aggregation part of the model offers tunable parameters to accommodate different levels of leniency.While providing an efficient model to assess the similarity between courses with existing resources,this research work also steers future research attempts to apply NLP in the field of articulation in an ideal direction by highlighting the persisting research gaps.
文摘Most software systems have different stakeholders with a variety of concerns.The process of collecting requirements from a large number of stakeholders is vital but challenging.We propose an efficient,automatic approach to collecting requirements from different stakeholders’responses to a specific question.We use natural language processing techniques to get the stakeholder response that represents most other stakeholders’responses.This study improves existing practices in three ways:Firstly,it reduces the human effort needed to collect the requirements;secondly,it reduces the time required to carry out this task with a large number of stakeholders;thirdly,it underlines the importance of using of data mining techniques in various software engineering steps.Our approach uses tokenization,stop word removal,and word lemmatization to create a list of frequently accruing words.It then creates a similarity matrix to calculate the score value for each response and selects the answer with the highest score.Our experiments show that using this approach significantly reduces the time and effort needed to collect requirements and does so with a sufficient degree of accuracy.
文摘With the acceleration of network communication in the 5G era,the volume of data communication in cyberspace has increased unprecedentedly.The speed of data transmission will accelerate.Subsequently,the security of network communication data becomes more and more serious.Among them,malicious cross⁃site scripting leading to the leakage of user information is very serious.This article uses URL attribute analysis method and YARA rule to process data for cross⁃site scripting based on the long short⁃term memory(LSTM)characteristics of LSTM model.The results show that the LSTM classification model adopted in this paper has higher recall rate and F1⁃score than other machine learning methods,which proves that the method adopted in this paper is feasible.