Journal Articles
106,514 articles found
1. Embracing different languages and local differences: Co-constructive patient simulation strengthens host countries' clinical training in psychiatry
Authors: Şafak Eray Çamlı, Büşra Ece Yavuz, Meliha Feyza Gök, Idil Yazgan, Yanki Yazgan, Ayelet Brand-Gothelf, Doron Gothelf, Doron Amsalem, Andrés Martin. World Journal of Psychiatry, SCIE, 2024, Issue 1, pp. 111-118 (8 pages).
BACKGROUND: Global education in psychiatry is heavily influenced by knowledge from Western, high-income countries, which obscures local voices and expertise. AIM: To adapt a human simulation model to psychiatric education in a context that is specific to local languages and cultures. METHODS: We conducted an observational study consisting of six human simulation sessions with standardized patients from two host countries, speaking their native languages, and following an adaptation of the co-constructive patient simulation (CCPS) model. As local faculty became increasingly familiar with the CCPS approach, they took on the role of facilitators, working in their country's native language. RESULTS: Fifty-three learners participated: 19 child and adolescent psychiatry trainees and 3 faculty members in Türkiye (as a group that met online during 3 consecutive months), and 24 trainees and 7 faculty in Israel (divided into 3 groups, in parallel in-person sessions during a single training day). Each of the six cases reflected local realities and clinical challenges and was associated with specific learning goals identified by each case-writing trainee. CONCLUSION: Human simulation has not been fully incorporated into psychiatric education: the creation of immersive clinical experiences and the strengthening of reflective practice are two areas ripe for development. Our adaptations of CCPS can also strengthen local and regional networks and psychiatric communities of practice. Finally, the model can help question and press against hegemonies in psychiatric training that overshadow local expertise.
Keywords: Human simulation; Standardized patients; Medical education; Psychiatric education; Capacity building; Local languages
2. The value of the vertebral HU value from chest CT in opportunistic screening for osteoporosis in type 2 diabetes mellitus (Cited: 1)
Authors: 王力平, 连天星, 胡永荣, 杨红胜, 曾智谋, 刘浩, 屈波. 《中国组织工程研究》 (Chinese Journal of Tissue Engineering Research), CAS, PKU Core, 2024, Issue 6, pp. 950-954 (5 pages).
Background: Studies have shown that HU (Hounsfield unit) values from lumbar CT can be used to screen for osteoporosis. With more patients presenting with pulmonary infection, and more of them also having type 2 diabetes, the use of chest CT has increased. Objective: To explore the role of the L1 vertebral HU value from chest CT in screening for osteoporosis in type 2 diabetes. Methods: The clinical data of 244 patients with type 2 diabetes admitted to the First Affiliated Hospital of Chengdu Medical College from June 2020 to June 2022 were retrospectively analyzed. Bone mineral density T-scores were obtained by dual-energy X-ray absorptiometry, and according to the WHO diagnostic criteria for osteoporosis, the subjects were divided into a non-osteoporosis group (n=120) and an osteoporosis group (n=124). The general characteristics, bone mineral density T-scores, and L1 vertebral HU values from chest CT were compared between the two groups; the relationship between the HU value and the T-score at each site was analyzed; and the accuracy for identifying osteoporosis in type 2 diabetes was evaluated. Results and conclusion: (1) There were no significant differences between the two groups in sex, age, body mass index, glycated hemoglobin, mean blood glucose, calcium, phosphorus, duration of type 2 diabetes, history of hypertension, or history of hyperlipidemia (P>0.05). (2) The HU value was positively correlated with the lowest T-score (r=0.619, P<0.01), the hip T-score (r=0.584, P<0.01), and the femoral neck T-score (r=0.641, P<0.01). With a cutoff of 98 HU, the prediction of osteoporosis in type 2 diabetes showed good accuracy, with a sensitivity of 70.8%. (3) These findings suggest that the L1 vertebral HU value from chest CT has good value for osteoporosis screening in patients with type 2 diabetes and can serve as an opportunistic, no-cost supplementary screening method.
Keywords: Chest CT; HU value; Type 2 diabetes; Osteoporosis; Bone mineral density
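As a rough illustration only (not from the paper), the reported 98-HU cutoff could be applied as a simple opportunistic screening rule. Because the abstract reports a positive correlation between HU values and T-scores, values at or below the cutoff are the ones flagged for confirmatory testing; the function below and its exact handling of the threshold are assumptions.

```python
# Minimal sketch (not from the paper): flag patients for confirmatory DXA when the
# L1 vertebral HU value from chest CT falls at or below the reported 98-HU cutoff.
def osteoporosis_screen(l1_hu_value: float, cutoff: float = 98.0) -> str:
    """Lower HU tracks lower bone mineral density (positive HU/T-score correlation)."""
    if l1_hu_value <= cutoff:
        return "suspected osteoporosis: refer for DXA"
    return "screen negative"

print(osteoporosis_screen(92.5))   # -> suspected osteoporosis: refer for DXA
print(osteoporosis_screen(120.0))  # -> screen negative
```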
3. Language Education Optimization: A New Human-Based Metaheuristic Algorithm for Solving Optimization Problems
Authors: Pavel Trojovsky, Mohammad Dehghani, Eva Trojovská, Eva Milkova. Computer Modeling in Engineering & Sciences, SCIE, EI, 2023, Issue 8, pp. 1527-1573 (47 pages).
In this paper, based on the concept of the NFL (No Free Lunch) theorem, that there is no unique algorithm that has the best performance for all optimization problems, a new human-based metaheuristic algorithm called Language Education Optimization (LEO) is introduced for solving optimization problems. LEO is inspired by the foreign language education process, in which a language teacher trains the students of language schools in the desired language skills and rules. LEO is mathematically modeled in three phases: (i) students selecting their teacher, (ii) students learning from each other, and (iii) individual practice, considering exploration in global search and exploitation in local search. The performance of LEO in optimization tasks has been challenged against fifty-two benchmark functions of various unimodal and multimodal types and the CEC 2017 test suite. The optimization results show that LEO, with its acceptable ability in exploration, exploitation, and maintaining a balance between them, has efficient performance in optimization applications and solution presentation. LEO's efficiency in optimization tasks is compared with ten well-known metaheuristic algorithms. Analyses of the simulation results show that LEO performs effectively in optimization tasks and is significantly superior to and more competitive than the compared algorithms. The implementation results of the proposed approach on four engineering design problems show the effectiveness of LEO in solving real-world optimization applications.
Keywords: Optimization; Language education; Exploration; Exploitation; Metaheuristic algorithm
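To make the three phases concrete, here is a minimal population-based sketch in the spirit of the description above; it is not the published LEO algorithm, and the update rules, parameters, and sphere test function are assumptions.

```python
# Minimal sketch (assumed, not the published LEO code) of the three phases:
# (i) move toward a selected "teacher", (ii) learn from a random peer,
# (iii) individual practice via a shrinking random perturbation.
import numpy as np

def leo_sketch(objective, dim=10, pop_size=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    for t in range(iters):
        teacher = pop[np.argmin(fit)].copy()              # phase (i): best member as teacher
        for i in range(pop_size):
            cand = pop[i] + rng.random(dim) * (teacher - pop[i])
            j = rng.integers(pop_size)                    # phase (ii): random peer
            if fit[j] < fit[i]:
                cand += rng.random(dim) * (pop[j] - pop[i])
            cand += (1 - t / iters) * rng.normal(scale=0.1, size=dim)  # phase (iii): practice
            cand = np.clip(cand, lb, ub)
            f = objective(cand)
            if f < fit[i]:                                # greedy acceptance
                pop[i], fit[i] = cand, f
    return pop[np.argmin(fit)], float(fit.min())

best_x, best_f = leo_sketch(lambda x: np.sum(x**2))       # sphere test function (assumed)
print(best_f)
```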
4. Arabic Sign Language Gesture Classification Using Deer Hunting Optimization with Machine Learning Model
Authors: Badriyya B. Al-onazi, Mohamed K. Nour, Hussain Alshahran, Mohamed Ahmed Elfaki, Mrim M. Alnfiai, Radwa Marzouk, Mahmoud Othman, Mahir M. Sharif, Abdelwahed Motwakel. Computers, Materials & Continua, SCIE, EI, 2023, Issue 5, pp. 3413-3429 (17 pages).
Sign language includes the motion of the arms and hands to communicate with people with hearing disabilities. Several models have been available in the literature for sign language detection and classification for enhanced outcomes, but the latest advancements in computer vision enable us to perform sign/gesture recognition using deep neural networks. This paper introduces an Arabic Sign Language Gesture Classification using Deer Hunting Optimization with Machine Learning (ASLGC-DHOML) model. The presented ASLGC-DHOML technique mainly concentrates on recognising and classifying sign language gestures. The ASLGC-DHOML model first pre-processes the input gesture images and generates feature vectors using the densely connected network (DenseNet169) model. For gesture recognition and classification, a multilayer perceptron (MLP) classifier is exploited to recognize and classify the existence of sign language gestures. Lastly, the DHO algorithm is utilized for parameter optimization of the MLP model. The experimental results of the ASLGC-DHOML model are tested and the outcomes are inspected under distinct aspects. The comparison analysis highlighted that the ASLGC-DHOML method achieves better gesture classification results than other techniques, with a maximum accuracy of 92.88%.
Keywords: Machine learning; Sign language recognition; Multilayer perceptron; Deer hunting optimization; DenseNet
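The pipeline described above (DenseNet169 features fed to an MLP whose hyperparameters are tuned by DHO) can be sketched roughly as follows; the feature vectors are simulated here, the class count is an assumption, and the DHO tuning step is replaced by fixed placeholder hyperparameters.

```python
# Minimal sketch (not the authors' code): classify sign-gesture feature vectors with
# an MLP, assuming features were already extracted with a DenseNet169 backbone
# (simulated here with random 1664-dimensional vectors).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 1664))    # DenseNet169 pooled features are 1664-D
y = rng.integers(0, 32, size=1000)   # number of Arabic sign classes assumed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# In the paper the MLP hyperparameters are optimized by DHO; fixed placeholders here.
clf = MLPClassifier(hidden_layer_sizes=(256, 128), learning_rate_init=1e-3,
                    max_iter=200, random_state=0)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```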
5. Language-socialization-based Analysis of Putonghua Promotion's Influences on Tujia Cultural Identity: A Case Study of Enshi Tujia and Miao Autonomous Prefecture
Authors: HUANG Jiao. Journal of Literature and Art Studies, 2023, Issue 2, pp. 100-103 (4 pages).
Language functions as a carrier of culture and plays a crucial role in an individual's socialization into his or her cultural community; it is therefore inextricably interconnected with individual cultural identity. Language acquisition and language ability development require the necessary sociocultural interactions and practices. China has 55 ethnic minorities with their own distinctive cultures and language varieties, but some of them are experiencing loss of their languages and cultural identity. This paper examines the influences of Putonghua promotion on Tujia cultural identity from the perspective of language socialization.
Keywords: Putonghua promotion; Tujia cultural identity; Influences; Language socialization
6. Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals
Authors: Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, Issue 10, pp. 689-711 (23 pages).
Deaf people or people facing hearing issues can communicate using sign language (SL), a visual language. Many works based on resource-rich languages have been proposed; however, work on low-resource languages is still lacking. Unlike other SLs, the visuals of the Urdu language are different. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture specifically designed for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments using two datasets containing 1,500 and 78,000 images, employing a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was transformed into a greyscale image and underwent noise filtering. Comparative analysis with machine learning baseline methods (support vector machine, Gaussian Naive Bayes, random forest, and the k-nearest neighbors algorithm) on the UrSL alphabets dataset demonstrated the superiority of UrSL-CNN, which achieved an accuracy of 0.95. Additionally, our model exhibited superior performance in precision, recall, and F1-score evaluations. This work not only contributes to advancing sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
Keywords: Convolutional neural networks; Pakistan sign language; Visual language
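The greyscale conversion and noise filtering mentioned above can be illustrated with a short preprocessing sketch; the specific filter, image size, and normalization are assumptions, not details taken from the paper.

```python
# Minimal preprocessing sketch (assumed, not the authors' code): convert a sign image
# to greyscale and apply noise filtering before feeding it to a CNN.
import cv2
import numpy as np

def preprocess(path: str, size: int = 64) -> np.ndarray:
    img = cv2.imread(path)                      # BGR image from disk
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    denoised = cv2.medianBlur(grey, 3)          # simple noise filter (choice assumed)
    resized = cv2.resize(denoised, (size, size))
    return resized.astype(np.float32) / 255.0   # normalized CNN input

# batch = np.stack([preprocess(p) for p in image_paths])[..., None]  # N x 64 x 64 x 1
```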
7. A study of the r-hued coloring of F_m, P_n⊙F_m and C_n⊙F_m
Authors: 西日尼阿依·努尔麦麦提, 刘凤霞. 《四川师范大学学报(自然科学版)》 (Journal of Sichuan Normal University, Natural Science Edition), CAS, 2024, Issue 2, pp. 269-274 (6 pages).
Given two graphs G and H, their corona product, denoted G⊙H, is the graph obtained by taking one copy of G and |V(G)| copies of H and joining the i-th vertex of G to every vertex of the i-th copy of H. A (k,r)-coloring of a graph G is a proper k-coloring of G such that the neighbors of every vertex of degree d receive at least min{d, r} distinct colors. The r-hued chromatic number, denoted χ_r(G), is the smallest positive integer k such that G admits a (k,r)-coloring. This paper mainly discusses the r-hued chromatic numbers of F_m, P_n⊙F_m and C_n⊙F_m.
Keywords: (k, r)-coloring; r-hued chromatic number; Corona product graph
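For readability, the two definitions in the abstract can be restated in standard notation (a restatement only; the notation N(v) for neighborhood and d(v) for degree is assumed):

```latex
% A proper coloring c of G is a (k,r)-coloring if each vertex also sees enough
% distinct colors in its neighborhood N(v):
\[
  c : V(G) \to \{1,\dots,k\}, \qquad
  c(u) \neq c(v) \ \text{for all } uv \in E(G), \qquad
  |c(N(v))| \ge \min\{d(v), r\} \ \text{for all } v \in V(G).
\]
% The r-hued chromatic number is the least such k:
\[
  \chi_r(G) = \min\{\, k : G \text{ admits a } (k,r)\text{-coloring} \,\}.
\]
```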
8. Languages
Source: 《疯狂英语(初中天地)》 (Crazy English, Junior High Edition), 2023, Issue 6, pp. 64-65 (2 pages).
According to a survey published by the European Commission, the British are officially the worst language learners in Europe: 62 percent of them can't speak any other language apart from their own! While 38 percent of Britons speak at least one foreign language, only 18 percent speak two.
Keywords: own; speak; language
9. Systematizing Teacher Development: A Review of Foreign Language Teacher Learning
Authors: Guang ZENG. Chinese Journal of Applied Linguistics, 2024, Issue 3, pp. 518-523, 526 (7 pages).
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan of Capital Normal University and published in September 2022, provides a systematic introduction to foreign language teacher learning, which to some extent makes up for this shortcoming. The book presents the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews cutting-edge research results, and foresees future development trends, painting a complete research picture for researchers in the field of foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. This is an important inspiration for conducting foreign language teacher learning research in the future. This paper reviews the book in terms of its content, major characteristics, contributions, and limitations.
Keywords: Foreign language teacher learning; Foreign language teacher education; Foreign language teaching; Teacher development
10. Single-cell sequencing reveals the reproductive variations between primiparous and multiparous Hu ewes
Authors: Ting Ge, Yifan Wen, Bo Li, Xiaoyu Huang, Shaohua Jiang, Enping Zhang. Journal of Animal Science and Biotechnology, SCIE, CAS, CSCD, 2024, Issue 2, pp. 614-631 (18 pages).
Background: In modern sheep production systems, the reproductive performance of ewes determines the economic profitability of farming. Revealing the genetic mechanisms underlying differences in litter size is important for the selection and breeding of highly prolific ewes. Hu sheep, a high-quality Chinese sheep breed, is known for its high fecundity and is often used as a model to study prolificacy traits. In the current study, animals were divided into two groups according to their delivery rates in three consecutive lambing seasons (namely, the high and low reproductive groups with ≥3 lambs and one lamb per season, respectively; n=3 per group). The ewes were slaughtered within 12 h of estrus, and unilateral ovarian tissues were collected and analyzed by 10x Genomics single-cell RNA sequencing. Results: A total of 5 types of somatic cells were identified and the corresponding expression profiles were mapped in the ovaries of each group. Noticeably, the differences in the ovary somatic cell expression profiles between the high and low reproductive groups were mainly clustered in the granulosa cells. Furthermore, four granulosa cell subtypes were identified. GeneSwitches analysis revealed that the abundance of JPH1 expression and the reduction of LOC101112291 expression could lead to different evolutionary directions of the granulosa cells. Additionally, the expression levels of FTH1 and FTL in mural granulosa cells of the highly reproductive group were significantly higher. These genes inhibit necroptosis and ferroptosis of mural granulosa cells, which helps prevent follicular atresia. Conclusions: This study provides insights into the molecular mechanisms underlying the high fecundity of Hu sheep. The differences in gene expression profiles, particularly in the granulosa cells, suggest that these cells play a critical role in female prolificacy. The findings also highlight the importance of genes such as JPH1, LOC101112291, FTH1, and FTL in regulating granulosa cell function and follicular development.
Keywords: Granulosa cells; Hu sheep; Lambing number; Ovarian somatic cells; Single-cell RNA sequencing
11. Literature classification and its applications in condensed matter physics and materials science by natural language processing
Authors: 吴思远, 朱天念, 涂思佳, 肖睿娟, 袁洁, 吴泉生, 李泓, 翁红明. Chinese Physics B, SCIE, EI, CAS, CSCD, 2024, Issue 5, pp. 117-123 (7 pages).
The exponential growth of literature is constraining researchers' access to comprehensive information in related fields. While natural language processing (NLP) may offer an effective solution to literature classification, it remains hindered by the lack of labelled datasets. In this article, we introduce a novel method for generating literature classification models through semi-supervised learning, which can generate labelled datasets iteratively with limited human input. We apply this method to train NLP models for classifying literature related to several research directions, i.e., battery, superconductor, topological material, and artificial intelligence (AI) in materials science. The trained NLP 'battery' model, applied to a larger dataset different from the training and testing datasets, can achieve an F1 score of 0.738, which indicates the accuracy and reliability of this scheme. Furthermore, our approach demonstrates that even with insufficient data, the not-yet-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
Keywords: Natural language processing; Text mining; Materials science
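The iterative labelled-dataset generation can be sketched as a pseudo-labelling loop; this is an assumed workflow with a TF-IDF/logistic-regression stand-in for the NLP model, not the authors' pipeline, and the threshold and round count are placeholders.

```python
# Minimal sketch of iterative semi-supervised labelling (assumed workflow): train on a
# small seed set, pseudo-label confident abstracts, add them to the pool, repeat.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def iterative_labelling(seed_texts, seed_labels, unlabelled_texts,
                        rounds=3, threshold=0.9):
    texts, labels = list(seed_texts), list(seed_labels)
    pool = list(unlabelled_texts)
    for _ in range(rounds):
        vec = TfidfVectorizer(max_features=20000)
        X = vec.fit_transform(texts)
        clf = LogisticRegression(max_iter=1000).fit(X, labels)
        if not pool:
            break
        proba = clf.predict_proba(vec.transform(pool))
        confident = np.max(proba, axis=1) >= threshold
        # Move confidently classified texts into the labelled set with their predicted class.
        texts += [t for t, keep in zip(pool, confident) if keep]
        labels += list(clf.classes_[np.argmax(proba[confident], axis=1)])
        pool = [t for t, keep in zip(pool, confident) if not keep]
    return clf, vec
```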
12. DeBERTa-GRU: Sentiment Analysis for Large Language Model
Authors: Adel Assiri, Abdu Gumaei, Faisal Mehmood, Touqeer Abbas, Sami Ullah. Computers, Materials & Continua, SCIE, EI, 2024, Issue 6, pp. 4219-4236 (18 pages).
Modern technological advancements have made social media an essential component of daily life. Social media allow individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or some other personal emotion in order to understand the sentiment context of the text. Sentiment analysis is essential in business and society because it impacts strategic decision-making. Sentiment analysis involves challenges due to lexical variation, unlabeled datasets, and text distance correlations. Execution time increases with the sequential processing of sequence models, whereas the computation time of Transformer models is reduced because of their parallel processing. This study uses a hybrid deep learning strategy to combine the strengths of Transformer and sequence models while avoiding their limitations. In particular, the proposed model integrates Decoding-enhanced Bidirectional Encoder Representations from Transformers (BERT) attention (DeBERTa) and the Gated Recurrent Unit (GRU) for sentiment analysis. Using the Decoding-enhanced BERT technique, the words are mapped into a compact, semantic word embedding space, and the Gated Recurrent Unit model can correctly capture the distance contextual semantics. The proposed hybrid model achieves an F1-score of 97% on the Twitter Large Language Model (LLM) dataset, which is much higher than the performance of newer techniques.
Keywords: DeBERTa; GRU; Naive Bayes; LSTM; Sentiment analysis; Large language model
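As an architectural illustration only (not the authors' implementation), a DeBERTa encoder can be combined with a GRU head roughly as follows; the checkpoint name, GRU width, and three-class output are assumptions.

```python
# Minimal sketch (assumed architecture): DeBERTa encoder, a GRU over the token
# embeddings, and a linear sentiment head.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DebertaGRU(nn.Module):
    def __init__(self, model_name="microsoft/deberta-v3-base", n_classes=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.gru = nn.GRU(hidden, 128, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        _, h = self.gru(tokens)                  # h: (2, batch, 128), one per direction
        h = torch.cat([h[0], h[1]], dim=-1)      # concatenate both directions
        return self.head(h)

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
batch = tok(["great product", "terrible service"], padding=True, return_tensors="pt")
logits = DebertaGRU()(batch["input_ids"], batch["attention_mask"])
```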
13. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang. Computer Modeling in Engineering & Sciences, SCIE, EI, 2024, Issue 6, pp. 2399-2450 (52 pages).
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: Sign language recognition; Deep learning; Artificial intelligence; Computer vision; Gesture recognition
14. Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models
Authors: Zheyi Chen, Liuchang Xu, Hongting Zheng, Luyao Chen, Amr Tolba, Liang Zhao, Keping Yu, Hailin Feng. Computers, Materials & Continua, SCIE, EI, 2024, Issue 8, pp. 1753-1808 (56 pages).
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLM) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLM) to Large Multimodal Models (LMM). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Keywords: Artificial intelligence; Large language models; Large multimodal models; Foundation models
15. LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework
Authors: Hao Chen, Runfeng Xie, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan, Kai Zhang. Computers, Materials & Continua, SCIE, EI, 2024, Issue 6, pp. 4283-4296 (14 pages).
Accurately recommending candidate news to users is a basic challenge of personalized news recommendation systems. Traditional methods usually find it difficult to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are more friendly to active users with rich historical behaviors, but they cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) with traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information, and the generated news representations are then used to enhance the news encoding in traditional methods. In addition, multi-hop relationships of news entities are mined and the structural information of news is encoded using KG, thus alleviating the challenge of long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework has established a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
Keywords: Large language models; News recommendation; Knowledge graphs (KG)
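One plausible (assumed) way to combine an LLM-derived news representation with KG structural information, in the spirit of the framework above, is simple concatenation followed by dot-product ranking; the embedding sources, dimensions, and pooling choices here are placeholders, not the LKPNR design.

```python
# Minimal sketch (assumed, not the LKPNR code): a news representation built from an
# LLM text embedding plus mean-pooled KG entity embeddings, scored against a user profile.
import numpy as np

KG_DIM = 64  # KG embedding dimension assumed

def news_representation(text_embedding, entity_embeddings):
    """text_embedding: vector from an LLM encoder (assumed precomputed).
       entity_embeddings: list of KG embeddings for entities mentioned in the news."""
    if entity_embeddings:
        kg_part = np.mean(np.stack(entity_embeddings), axis=0)
    else:
        kg_part = np.zeros(KG_DIM)
    return np.concatenate([text_embedding, kg_part])

def score(user_history, candidate):
    # Dot-product ranking of a candidate item against the mean of the user's history.
    profile = np.mean(np.stack(user_history), axis=0)
    return float(profile @ candidate)
```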
16. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua, SCIE, EI, 2024, Issue 3, pp. 4379-4397 (19 pages).
Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets; and an ensemble method, which is utilized to explore the increase in performance upon combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. It reveals that using an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
Keywords: Natural language processing; Software bug prediction; Transfer learning; Ensemble learning; Feature selection
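A rough sketch of the kind of pipeline described above (feature selection feeding an ensemble of classifiers, scored by AUC-ROC) is shown below; the synthetic data stands in for the NASA/Promise datasets, the particular selector and base classifiers are assumptions, and the transfer-learning step is omitted.

```python
# Minimal sketch (assumed, not the paper's exact setup): feature selection plus a soft-
# voting ensemble, evaluated with cross-validated AUC-ROC on an imbalanced dataset.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic, imbalanced stand-in for a bug-prediction dataset.
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
)
model = make_pipeline(SelectKBest(f_classif, k=15), ensemble)
print("AUC-ROC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```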
17. Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. Computers, Materials & Continua, SCIE, EI, 2024, Issue 5, pp. 2481-2503 (23 pages).
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance in tackling domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module. This method significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: Relational triple extraction; Semantic interaction; Large language models; Data augmentation; Specific domains
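The voting strategy between LLMs and fine-tuned SLMs can be illustrated with a small sketch; the confidence-threshold rule and the function signatures are assumptions about one plausible way to implement such a scheme, not the authors' implementation.

```python
# Minimal sketch (assumed logic) of a voting strategy: keep the small model's triple
# when it is confident, otherwise defer the challenging sample to an LLM.
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]   # (head entity, relation, tail entity)

def vote(sentences: List[str],
         slm_predict: Callable[[str], Tuple[Triple, float]],
         llm_predict: Callable[[str], Triple],
         confidence_threshold: float = 0.8) -> List[Triple]:
    results = []
    for sent in sentences:
        triple, conf = slm_predict(sent)       # fine-tuned SLM with a confidence score
        if conf >= confidence_threshold:
            results.append(triple)
        else:
            results.append(llm_predict(sent))  # challenging sample: reevaluate with the LLM
    return results
```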
18. Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, Issue 5, pp. 421-447 (27 pages).
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of the attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
Keywords: Large language models; PII leakage; Privacy; Memorization; Overfitting; Membership Inference Attack (MIA)
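The simplest member of the memorization-focused category mentioned above is a loss-threshold membership inference test, sketched below for illustration; the model name ("gpt2" as a stand-in) and the threshold value are assumptions, not details from the paper.

```python
# Minimal sketch (assumed, not from the paper) of a loss-threshold membership inference
# test: unusually low language-model loss on a text is weak evidence that the text was
# seen during training or fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # stand-in model; the paper studies deployed LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def lm_loss(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()   # mean token negative log-likelihood

def is_member(text: str, threshold: float = 3.0) -> bool:
    # The threshold would be calibrated on known member/non-member samples (assumed here).
    return lm_loss(text) < threshold

print(lm_loss("The quick brown fox jumps over the lazy dog."))
```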
19. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua, SCIE, EI, 2024, Issue 2, pp. 1669-1686 (18 pages).
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers; Conversation; Ensemble model; Fine-tuning; Generalized Autoregressive Pretraining for Language Understanding; Generative Pre-trained Transformer; Hyperparameter tuning; Natural language processing; Robustly Optimized BERT Pretraining Approach; Sentence classification; Transformer models
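A minimal sketch of the ensemble idea follows, assuming probability-averaging over several fine-tuned classifiers; the actual EPLM-HT combination rule and hyperparameter tuning are not reproduced, and the dummy classifiers merely stand in for the fine-tuned models.

```python
# Minimal sketch (assumed, not the EPLM-HT implementation): average class probabilities
# from several classifiers to label a conversation sentence.
import numpy as np

LABELS = ["information", "question", "directive", "commission"]

def ensemble_predict(sentence: str, classifiers) -> str:
    """classifiers: callables returning a probability vector over LABELS
       (e.g., fine-tuned BERT, RoBERTa, GPT, DistilBERT, XLNet heads)."""
    probs = np.mean([clf(sentence) for clf in classifiers], axis=0)
    return LABELS[int(np.argmax(probs))]

# Example with dummy one-hot classifiers standing in for the fine-tuned models:
fake = lambda bias: (lambda s: np.eye(4)[bias])
print(ensemble_predict("Could you send the report?", [fake(1), fake(1), fake(2)]))
```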
20. Impact of transcranial electrical stimulation on serum neurotrophic factors and language function in patients with speech disorders
Authors: Li Sun, Kai Xiao, Xiao-Yan Shen, Shu Wang. World Journal of Clinical Cases, SCIE, 2024, Issue 10, pp. 1742-1749 (8 pages).
BACKGROUND: Speech disorders have a substantial impact on communication abilities and quality of life. Traditional treatments such as speech and psychological therapies frequently demonstrate limited effectiveness and patient compliance. Transcranial electrical stimulation (TES) has emerged as a promising non-invasive treatment to improve neurological functions. However, its effectiveness in enhancing language functions and serum neurofactor levels in individuals with speech disorders requires further investigation. AIM: To investigate the impact of TES in conjunction with standard therapies on serum neurotrophic factor levels and language function in patients with speech disorders. METHODS: In a controlled study spanning from March 2019 to November 2021, 81 patients with speech disorders were divided into a control group (n=40) receiving standard speech stimulation and psychological intervention, and an observation group (n=41) receiving additional TES. The study assessed serum levels of ciliary neurotrophic factor (CNTF), glial cell-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), and nerve growth factor (NGF), as well as evaluations of motor function, language function, and development quotient scores. RESULTS: After 3 wk of intervention, the observation group exhibited significantly higher serum levels of CNTF, GDNF, BDNF, and NGF compared to the control group. Moreover, improvements were noted in motor function, cognitive function, language skills, physical abilities, and overall development quotient scores. The observation group also displayed superior performance. CONCLUSION: This retrospective study concluded that TES combined with traditional speech and psychotherapy can effectively increase the levels of neurokines in the blood and enhance language function in patients with speech disorders. These results provide a promising avenue for integrating TES into standard treatment methods for speech disorders.
Keywords: Transcranial electrical stimulation; Serum neurofactor levels; Developmental level; Language features