Journal Articles
98,047 articles found
1. Embracing different languages and local differences: Co-constructive patient simulation strengthens host countries' clinical training in psychiatry
Authors: Şafak Eray Çamlı, Büşra Ece Yavuz, Meliha Feyza Gök, Idil Yazgan, Yanki Yazgan, Ayelet Brand-Gothelf, Doron Gothelf, Doron Amsalem, Andrés Martin
World Journal of Psychiatry (SCIE), 2024, No. 1, pp. 111-118 (8 pages)
Abstract: BACKGROUND: Global education in psychiatry is heavily influenced by knowledge from Western, high-income countries, which obscures local voices and expertise. AIM: To adapt a human simulation model to psychiatric education in a context that is specific to local languages and cultures. METHODS: We conducted an observational study consisting of six human simulation sessions with standardized patients from two host countries, speaking their native languages, and following an adaptation of the co-constructive patient simulation (CCPS) model. As local faculty became increasingly familiar with the CCPS approach, they took on the role of facilitators, in their country's native language. RESULTS: Fifty-three learners participated: 19 child and adolescent psychiatry trainees and 3 faculty members in Türkiye (as a group that met online during 3 consecutive months); and 24 trainees and 7 faculty in Israel (divided into 3 groups, in parallel in-person sessions during a single training day). Each of the six cases reflected local realities and clinical challenges, and was associated with specific learning goals identified by each case-writing trainee. CONCLUSION: Human simulation has not been fully incorporated into psychiatric education: the creation of immersive clinical experiences and the strengthening of reflective practice are two areas ripe for development. Our adaptations of CCPS can also strengthen local and regional networks and psychiatric communities of practice. Finally, the model can help question and press against hegemonies in psychiatric training that overshadow local expertise.
Keywords: Human simulation; Standardized patients; Medical education; Psychiatric education; Capacity building; Local languages
2. Languages
疯狂英语(初中天地) (Crazy English: Junior High), 2023, No. 6, pp. 64-65 (2 pages)
Abstract: According to a survey published by the European Commission, the British are officially the worst language learners in Europe—62 percent of them can't speak any other language apart from their own! While 38 percent of Britons speak at least one foreign language, only 18 percent speak two.
Keywords: own; speak; language
3. Literature classification and its applications in condensed matter physics and materials science by natural language processing
Authors: 吴思远, 朱天念, 涂思佳, 肖睿娟, 袁洁, 吴泉生, 李泓, 翁红明
Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 117-123 (7 pages)
Abstract: The exponential growth of literature is constraining researchers' access to comprehensive information in related fields. While natural language processing (NLP) may offer an effective solution to literature classification, it remains hindered by the lack of labelled datasets. In this article, we introduce a novel method for generating literature classification models through semi-supervised learning, which can generate a labelled dataset iteratively with limited human input. We apply this method to train NLP models for classifying literature related to several research directions, i.e., battery, superconductor, topological material, and artificial intelligence (AI) in materials science. The trained NLP 'battery' model, applied on a larger dataset different from the training and testing dataset, can achieve an F1 score of 0.738, which indicates the accuracy and reliability of this scheme. Furthermore, our approach demonstrates that even with insufficient data, the not-yet-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
Keywords: natural language processing; text mining; materials science
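The iterative, semi-supervised labelling scheme described in this entry can be illustrated with a small self-training loop: a classifier trained on a seed of human-labelled abstracts pseudo-labels the unlabelled pool, and only high-confidence predictions are folded back in for the next cycle. The snippet below is a minimal sketch of that idea using scikit-learn; the seed texts, the 0.9 confidence threshold, and the TF-IDF/logistic-regression pipeline are illustrative assumptions, not the authors' implementation.

```python
# Minimal self-training sketch: iteratively pseudo-label high-confidence abstracts
# and fold them back into the training set (illustrative data and threshold).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["lithium-ion cathode cycling stability",            # tiny labelled seed
         "room-temperature superconductivity claim"]
y = ["battery", "superconductor"]
pool = ["solid-state electrolyte interface study",           # unlabelled pool
        "flux pinning in cuprate thin films",
        "weyl semimetal band structure"]

vectorizer = TfidfVectorizer().fit(texts + pool)              # shared vocabulary

for cycle in range(3):                                        # a few labelling cycles
    clf = LogisticRegression(max_iter=1000).fit(vectorizer.transform(texts), y)
    if not pool:
        break
    probs = clf.predict_proba(vectorizer.transform(pool))
    confident = np.where(probs.max(axis=1) >= 0.9)[0]         # assumed threshold
    for i in sorted(confident, reverse=True):                 # pop from the back first
        texts.append(pool.pop(i))
        y.append(clf.classes_[probs[i].argmax()])
```

In practice the confident pseudo-labels would be spot-checked by a human between cycles, which is the "limited human input" the entry refers to.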
4. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang
Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2399-2450 (52 pages)
Abstract: Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: Sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
5. Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti
Journal of Software Engineering and Applications, 2024, No. 5, pp. 421-447 (27 pages)
Abstract: The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
Keywords: Large Language Models; PII Leakage; Privacy; Memorization; Overfitting; Membership Inference Attack (MIA)
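Of the attack families surveyed in this entry, a loss-based membership inference attack is the simplest to illustrate: a candidate string to which the model assigns unusually low loss (low perplexity) is treated as more likely to have been seen during training or fine-tuning. The sketch below uses a small open model from Hugging Face transformers purely for illustration; the GPT-2 checkpoint, the probe string, and the fixed threshold are assumptions and would need to be calibrated against reference data in a real evaluation.

```python
# Loss-based membership inference sketch: low loss (low perplexity) on a candidate
# string is weak evidence that the string was memorized during training.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # small open model for illustration
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss             # mean token cross-entropy
    return math.exp(loss.item())

candidate = "John Doe's social security number is 123-45-6789"  # hypothetical PII probe
THRESHOLD = 40.0                                       # assumed; calibrate on held-out text
flagged = perplexity(candidate) < THRESHOLD
print("possible memorization" if flagged else "no signal")
```

Auto-completion and extraction attacks follow the same pattern but prompt the model with a prefix and inspect what it generates, rather than scoring a fixed candidate.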
6. Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang
Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2481-2503 (23 pages)
Abstract: In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance in tackling domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module. This method significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: Relational triple extraction; semantic interaction; large language models; data augmentation; specific domains
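The voting strategy in this entry, where a fine-tuned small model handles routine sentences and difficult or low-confidence samples are re-evaluated by a large language model, can be sketched as a confidence-gated router. Everything below is illustrative: extract_with_slm, query_llm, and the 0.7 threshold are hypothetical stand-ins, not the paper's actual components.

```python
# Confidence-gated voting sketch: trust the small fine-tuned extractor when it is
# confident, otherwise bring in a large language model and vote on the triples.
from typing import List, Tuple

Triple = Tuple[str, str, str]                       # (head entity, relation, tail entity)

def extract_with_slm(sentence: str) -> Tuple[List[Triple], float]:
    """Hypothetical fine-tuned small model: returns triples and a confidence score."""
    return [("aspirin", "treats", "headache")], 0.55

def query_llm(sentence: str) -> List[Triple]:
    """Hypothetical LLM call used to re-evaluate challenging samples."""
    return [("aspirin", "treats", "headache"), ("aspirin", "is_a", "NSAID")]

def extract(sentence: str, threshold: float = 0.7) -> List[Triple]:
    triples, confidence = extract_with_slm(sentence)
    if confidence >= threshold:                     # easy sample: keep the SLM output
        return triples
    llm_triples = query_llm(sentence)               # hard sample: re-evaluate with the LLM
    agreed = set(triples) & set(llm_triples)        # simple vote: prefer agreement
    return sorted(agreed) if agreed else llm_triples

print(extract("Aspirin, an NSAID, is commonly used to treat headache."))
```

The point of the gate is cost control: the LLM is only consulted for the minority of samples the small model cannot resolve confidently.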
7. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar
Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 4379-4397 (19 pages)
Abstract: Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets; and an ensemble method, which is utilized to explore the increase in performance upon combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. It reveals that using an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing high-performing, useful end models.
Keywords: Natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
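The pipeline in this entry combines feature selection, cross-dataset (transfer-style) evaluation, and an ensemble of classifiers scored by AUC-ROC. The scikit-learn sketch below mirrors that structure on synthetic data; the specific classifiers, the number of selected features, and the random arrays stand in for the NASA and Promise defect datasets, which are not reproduced here.

```python
# Sketch: select relevant features, train a voting ensemble on one dataset,
# then evaluate on a different dataset to mimic cross-project transfer.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(300, 20)), rng.integers(0, 2, 300)   # stand-in source project
X_tgt, y_tgt = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)   # stand-in target project

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",                                   # average predicted probabilities
)
model = make_pipeline(SelectKBest(f_classif, k=10), ensemble)  # drop redundant features
model.fit(X_src, y_src)                              # train on the source project only

auc = roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])   # transfer-style evaluation
print(f"cross-dataset AUC-ROC: {auc:.3f}")
```

Training on one project and scoring AUC-ROC on another is what exposes how well the learned features generalize, which is the gap the entry highlights in earlier studies.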
8. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala
Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1669-1686 (18 pages)
Abstract: Sentence classification is the process of categorizing a sentence based on the context of the sentence. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversational sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1 score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; conversation; ensemble model; fine-tuning; Generalized Autoregressive Pretraining for Language Understanding; Generative Pre-Trained Transformer; hyperparameter tuning; natural language processing; Robustly Optimized BERT Pretraining Approach; sentence classification; transformer models
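The ensemble in this entry pools the predictions of several fine-tuned transformer encoders over the four dialogue-act classes. One common way to realize such an ensemble is to average each model's class probabilities, as in the sketch below; the checkpoint names are hypothetical placeholders for the authors' fine-tuned BERT, RoBERTa, DistilBERT (and related) models, which are not publicly specified here.

```python
# Soft-voting ensemble sketch: average class probabilities from several fine-tuned
# sentence classifiers (the checkpoint names below are hypothetical placeholders).
import numpy as np
from transformers import pipeline

CLASSES = ["information", "question", "directive", "commission"]
CHECKPOINTS = ["my-org/bert-dialogue-acts",        # hypothetical fine-tuned models
               "my-org/roberta-dialogue-acts",
               "my-org/distilbert-dialogue-acts"]

def ensemble_predict(sentence: str) -> str:
    probs = np.zeros(len(CLASSES))
    for name in CHECKPOINTS:
        clf = pipeline("text-classification", model=name, top_k=None)
        scores = {d["label"]: d["score"] for d in clf(sentence)[0]}
        probs += np.array([scores[c] for c in CLASSES])   # accumulate per-class scores
    return CLASSES[int(np.argmax(probs / len(CHECKPOINTS)))]

print(ensemble_predict("Could you send me the meeting notes?"))
```

Hyperparameter tuning in this setting typically sweeps learning rate, batch size, and epoch count for each member model before the ensemble step.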
9. Impact of transcranial electrical stimulation on serum neurotrophic factors and language function in patients with speech disorders
Authors: Li Sun, Kai Xiao, Xiao-Yan Shen, Shu Wang
World Journal of Clinical Cases (SCIE), 2024, No. 10, pp. 1742-1749 (8 pages)
Abstract: BACKGROUND: Speech disorders have a substantial impact on communication abilities and quality of life. Traditional treatments such as speech and psychological therapies frequently demonstrate limited effectiveness and patient compliance. Transcranial electrical stimulation (TES) has emerged as a promising non-invasive treatment to improve neurological functions. However, its effectiveness in enhancing language functions and serum neurofactor levels in individuals with speech disorders requires further investigation. AIM: To investigate the impact of TES in conjunction with standard therapies on serum neurotrophic factor levels and language function in patients with speech disorders. METHODS: In a controlled study spanning from March 2019 to November 2021, 81 patients with speech disorders were divided into a control group (n=40) receiving standard speech stimulation and psychological intervention, and an observation group (n=41) receiving additional TES. The study assessed serum levels of ciliary neurotrophic factor (CNTF), glial cell-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), and nerve growth factor (NGF), as well as evaluations of motor function, language function, and development quotient scores. RESULTS: After 3 wk of intervention, the observation group exhibited significantly higher serum levels of CNTF, GDNF, BDNF, and NGF compared to the control group. Moreover, improvements were noted in motor function, cognitive function, language skills, physical abilities, and overall development quotient scores. It is worth mentioning that the observation group also displayed superior performance. CONCLUSION: This retrospective study concluded that TES combined with traditional speech and psychotherapy can effectively increase the levels of neurokines in the blood and enhance language function in patients with speech disorders. These results provide a promising avenue for integrating TES into standard treatment methods for speech disorders.
Keywords: Transcranial electrical stimulation; Serum neurofactor levels; Developmental level; Language features
10. A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang
Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 1-40 (40 pages)
Abstract: Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet) and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied to specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
11. Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka
Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 2605-2625 (21 pages)
Abstract: Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. Then, we concatenated the critical information of the first stream and the hierarchy of the second stream's features to produce multiple levels of fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic Sign Language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL); hand gesture recognition; geometric feature; distance feature; angle feature; GoogleNet
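The two-stream design in this entry, handcrafted joint-skeleton features fused with deep features and classified by a kernel SVM after dimensionality reduction, can be outlined as below. The feature extractors are placeholders (random vectors stand in for the distance/angle features and the CNN embeddings); only the fuse-then-reduce-then-SVM structure follows the entry.

```python
# Sketch of the fusion pipeline: concatenate handcrafted skeleton features with
# deep embeddings, reduce dimensionality, then classify with a kernel SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples, n_classes = 200, 10
handcrafted = rng.normal(size=(n_samples, 60))     # stand-in distance/angle features
deep = rng.normal(size=(n_samples, 1024))          # stand-in CNN (e.g. GoogleNet) embeddings
labels = rng.integers(0, n_classes, n_samples)

fused = np.hstack([handcrafted, deep])             # multi-level feature fusion

clf = make_pipeline(PCA(n_components=50),          # dimensionality reduction before the SVM
                    SVC(kernel="rbf", C=10.0))     # kernel-based classifier
scores = cross_val_score(clf, fused, labels, cv=3)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

With real data, the handcrafted block would come from pose-estimation keypoints and the deep block from a pretrained image backbone applied to the gesture frames.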
12. Smaller & Smarter: Score-Driven Network Chaining of Smaller Language Models
Authors: Gunika Dhingra, Siddansh Chawla, Vijay K. Madisetti, Arshdeep Bahga
Journal of Software Engineering and Applications, 2024, No. 1, pp. 23-42 (20 pages)
Abstract: With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of the emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands and hardware and software prerequisites for training, all culminating in a substantial financial investment as well. In this paper, we present novel techniques like supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. Firstly, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Secondly, we propose an approach to utilize two smaller language models (in a network) performing the same task and retrieving the best relevant output from the two, ensuring peak performance for a specific task. Experimental evaluations establish the quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from utilizing techniques like supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
Keywords: Large Language Models (LLMs); Smaller Language Models (SLMs); Finance; Networking; Supervisor Model; Scoring Function
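The score-driven network described in this entry runs two smaller models on the same task in parallel, scores each output, and keeps the better one, escalating to a supervisor model only when neither clears a threshold. The sketch below shows only that control flow; both model calls and the scoring heuristic are hypothetical placeholders, not the paper's implementation.

```python
# Score-driven chaining sketch: run two small models in parallel, score both
# answers, return the best one, and escalate only when neither clears a threshold.
from concurrent.futures import ThreadPoolExecutor

def small_model_a(prompt: str) -> str:
    return "42"                                   # hypothetical SLM endpoint

def small_model_b(prompt: str) -> str:
    return "The answer is 41."                    # hypothetical SLM endpoint

def supervisor_model(prompt: str, draft: str) -> str:
    return draft                                  # hypothetical corrective supervisor

def score(prompt: str, answer: str) -> float:
    """Toy scoring function; a real one might check format, units, or self-consistency."""
    return 1.0 if answer.strip().isdigit() else 0.4

def answer(prompt: str, threshold: float = 0.8) -> str:
    with ThreadPoolExecutor() as executor:        # parallel execution of the two SLMs
        candidates = list(executor.map(lambda f: f(prompt), (small_model_a, small_model_b)))
    best = max(candidates, key=lambda c: score(prompt, c))
    if score(prompt, best) >= threshold:
        return best
    return supervisor_model(prompt, best)         # low score: let the supervisor correct it

print(answer("What is 6 * 7?"))
```

The appeal of this pattern is cost: two small models plus a cheap scoring pass can be far less expensive than a single call to a much larger model.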
13. Sentiment Analysis of Low-Resource Language Literature Using Data Processing and Deep Learning
Authors: Aizaz Ali, Maqbool Khan, Khalil Khan, Rehan Ullah Khan, Abdulrahman Aloraini
Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 713-733 (21 pages)
Abstract: Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, resource-poor languages like Urdu remain a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu, Turkish, Punjabi, Saraiki, and more. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Subsequent to this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, attaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
Keywords: Urdu sentiment analysis; convolutional neural networks; recurrent neural network; deep learning; natural language processing; neural networks
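For the recurrent model that this entry reports as the stronger of the two (91% accuracy), a minimal Keras architecture for five sentiment labels might look like the sketch below. The vocabulary size, sequence length, embedding width, and the two sample sentences are placeholders; the real effort in the paper lies in assembling and cleaning the Urdu corpus, which is not reproduced here.

```python
# Minimal RNN (LSTM) sketch for five-class Urdu sentiment classification.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["یہ فلم بہت اچھی تھی", "مجھے یہ کتاب پسند نہیں آئی"]   # illustrative Urdu samples
labels = np.array([0, 1])                         # 0=Positive, 1=Negative (of the 5 labels)

tokenizer = Tokenizer(num_words=20000)            # assumed vocabulary cap
tokenizer.fit_on_texts(texts)
X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=50)

model = models.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),
    layers.LSTM(64),                              # recurrent encoder over word embeddings
    layers.Dense(5, activation="softmax"),        # Positive/Negative/Neutral/Mixed/Ambiguous
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)         # toy fit; real training uses the full corpus
```

A comparable CNN baseline would simply swap the LSTM layer for one-dimensional convolution and pooling layers over the same embeddings.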
14. Potential use of large language models for mitigating students' problematic social media use: ChatGPT as an example
Authors: Xin-Qiao Liu, Zi-Ru Zhang
World Journal of Psychiatry (SCIE), 2024, No. 3, pp. 334-341 (8 pages)
Abstract: The problematic use of social media has numerous negative impacts on individuals' daily lives, interpersonal relationships, physical and mental health, and more. Currently, there are few methods and tools to alleviate problematic social media use, and their potential is yet to be fully realized. Emerging large language models (LLMs) are becoming increasingly popular for providing information and assistance to people and are being applied in many aspects of life. In mitigating problematic social media use, LLMs such as ChatGPT can play a positive role by serving as conversational partners and outlets for users, providing personalized information and resources, monitoring and intervening in problematic social media use, and more. In this process, we should recognize both the enormous potential and endless possibilities of LLMs such as ChatGPT, leveraging their advantages to better address problematic social media use, while also acknowledging the limitations and potential pitfalls of ChatGPT technology, such as errors, limitations in issue resolution, privacy and security concerns, and potential overreliance. When we leverage the advantages of LLMs to address issues in social media usage, we must adopt a cautious and ethical approach, being vigilant of the potential adverse effects that LLMs may have in addressing problematic social media use, in order to better harness technology to serve individuals and society.
Keywords: Problematic use of social media; Social media; Large language models; ChatGPT; Chatbots
15. Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
Authors: Alicia Biju, Vishnupriya Ramesh, Vijay K. Madisetti
Journal of Software Engineering and Applications, 2024, No. 5, pp. 340-358 (19 pages)
Abstract: Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
Keywords: Common Vulnerability Scoring System (CVSS); Large Language Models (LLMs); DALL-E; Prompt Injections; Training Data Poisoning; CVSS Metrics
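The extension described in this entry builds on the standard CVSS v3.1 base-score arithmetic, so it helps to see that arithmetic spelled out. The sketch below computes a base score from illustrative metric weights for a hypothetical prompt-injection finding; the chosen metric values are an assumption for demonstration, not the scores the authors assign.

```python
# CVSS v3.1 base-score arithmetic (scope unchanged) applied to a hypothetical
# prompt-injection vulnerability; the metric choices below are illustrative only.
import math

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest value with one decimal place that is >= x."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

# Assumed metrics for the example: AV:Network, AC:Low, PR:None, UI:Required,
# C:High, I:Low, A:None (weights taken from the CVSS v3.1 specification tables).
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62
C, I, A = 0.56, 0.22, 0.0

iss = 1 - (1 - C) * (1 - I) * (1 - A)           # Impact Sub-Score
impact = 6.42 * iss                              # scope unchanged
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(f"base score: {base}")                     # 7.1 for these illustrative metrics
```

An extension for LLM-specific threats typically keeps this arithmetic and adjusts how the qualitative metrics (attack vector, user interaction, impact) are interpreted for prompt injection and data poisoning scenarios.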
16. History of Western Philosophy and Quantum Language (Including Quantum Mechanics, Statistics, Fuzzy Logic, etc.)
Authors: Shiro Ishikawa
Journal of Applied Mathematics and Physics, 2024, No. 5, pp. 1769-1795 (27 pages)
Abstract: Although there are many different types of philosophy, many philosophers agree that the mainstream of Western philosophy (Socrates, Plato, Aristotle, Descartes, Kant, Wittgenstein) developed toward the perfection of Socrates' absolutism. But can the absolutism maintain its central position after analytic philosophy? There are pessimistic views on this problem, such as that of R. Rorty, the standard-bearer of neo-pragmatism. Recently, I proposed quantum language (which includes quantum mechanics, statistics, fuzzy sets, etc.). I think that this theory is not only one of the most fundamental scientific theories, but also the scientific final destination of Western philosophy. If so, Socrates' dream has come true. The purpose of this paper is to discuss the above and to inform readers that quantum language has the power to create a paradigm shift from the classical mechanical worldview to the quantum mechanical worldview.
Keywords: Quantum Language; Linguistic Copenhagen Interpretation; Fuzzy Logic
17. Integrating Chinese Culture Into Language Curriculum: Teaching Chinese Culture to International Students in China
Authors: ZHANG Wen
Cultural and Religious Studies, 2024, No. 3, pp. 158-163 (6 pages)
Abstract: This paper explores the integration of Chinese culture into language education for German students at the University of Shanghai for Science and Technology (USST). Focusing on USST's Chinese curriculum and pedagogical strategies, the study emphasizes the importance of cultural immersion, experiential learning, and authentic materials. Drawing on Byram's Intercultural Communicative Competence (ICC) model, the Cultural Studies Approach, and Task-Based Language Teaching (TBLT), the paper presents a case study on incorporating Chinese calligraphy into regular classes. This hands-on approach not only enriches cultural understanding but also enhances language skills. The findings stress the need for tailored, multifaceted pedagogical approaches to prepare international students for cross-cultural interactions in a globalized context.
Keywords: Chinese culture; language education; cultural immersion; Intercultural Communicative Competence
18. Language Assessment Feedback Towards Pedagogical Conversational Agent
Authors: Bhagya Prabhashini C, M. Latha
Sino-US English Teaching, 2024, No. 5, pp. 201-216 (16 pages)
Abstract: The continuous development of technology provides an opportunity to incorporate feedback into online assessments. The shift to online instruction during the pandemic was the most significant survival change; technology enabled every teacher and student to enter a virtual classroom to make sense of education. Feedback is part of language instruction and is a powerful key to improving students' learning performance. Feedback plays an influential and crucial role in teaching and learning. Feedback is an invaluable learning tool that helps learners avoid repeating the same error and creates impetus. Thus, knowing about formative exam feedback is students' right, because quality feedback allures them. Given students' eagerness, providing feedback is considered a good practice to be followed by all teaching faculty. Apropos of online feedback, the present study examines how pedagogical agents provide online feedback in language assessments. The study also considers the characteristics of pedagogical conversational agents that are suitable for providing feedback in online language assessment. Simply put, the study encapsulates that screen agents play an essential role in students' motivation and acceptance of learning through feedback.
Keywords: Feedback; learning performance; online language assessment; pedagogical conversational agents
19. Researching Creativity in Second Language Acquisition: Book Review
Authors: SHEN Rong-xi, ZENG Hai-xiang
Journal of Literature and Art Studies, 2024, No. 3, pp. 215-219 (5 pages)
Abstract: In the realm of psychology and language acquisition, the exploration of "individual differences", encompassing attributes such as personality traits, motivation, and language aptitude, has long been a focal point of scholarly investigation. Within this sphere, the book Researching Creativity in Second Language Acquisition by Ashleigh Pipes offers a fresh perspective on the role of creativity as a significant individual difference in Second Language Acquisition (SLA). Positioned against the backdrop of extensive empirical research on individual differences, this book stands out by focusing on the often overlooked aspect of creativity, which, Pipes argues, holds paramount importance in SLA.
Keywords: Second Language Acquisition (SLA); Creativity