Journal Articles
358,381 articles found
Predictive Value of the C-Reactive Protein/Albumin Ratio for Long-Term Adverse Cardiovascular and Cerebrovascular Events in Patients with Type 2 Diabetes Mellitus Complicated by Acute Myocardial Infarction
1
Authors: 马娟, 马盛宗, 燕茹, 马学平, 贾绍斌. 《中国全科医学》 CAS, PKU Core Journal, 2025, Issue 6, pp. 705-712 (8 pages)
Background: Acute myocardial infarction (AMI) is one of the leading threats to global public health. Although reperfusion therapy strategies are available, AMI-related major adverse cardiovascular and cerebrovascular events (MACCEs) remain a cause of death worldwide. In AMI patients with comorbid diabetes in particular, complex and severe coronary lesions make early identification and assessment of long-term prognosis relatively difficult; finding simple, easily obtainable laboratory indicators would therefore help predict MACCEs after percutaneous coronary intervention (PCI) in patients with type 2 diabetes mellitus (T2DM) complicated by AMI. Objective: To explore the predictive value of the serum C-reactive protein (CRP)/albumin (Alb) ratio (CAR) for long-term MACCEs after PCI in patients with T2DM complicated by AMI. Methods: A total of 1,683 patients with T2DM and AMI treated in the Department of Cardiology, General Hospital of Ningxia Medical University from 2014 to 2019 were enrolled, and their general clinical data and examination results were collected. All patients were followed up by telephone or in outpatient clinics. MACCEs were defined as all-cause death, non-fatal myocardial infarction, recurrent unstable angina, non-fatal stroke, readmission for new-onset or worsening heart failure, and repeat revascularization. Patients were divided into a MACCEs group (n = 508) and a non-MACCEs group (n = 1,175) according to whether MACCEs occurred during follow-up. Univariate and multivariate logistic regression analyses were used to explore factors influencing MACCEs in patients with T2DM and AMI. Kaplan-Meier survival curves were plotted and compared with the log-rank test. Receiver operating characteristic (ROC) curve analysis was used to assess the predictive performance of CAR for long-term MACCEs, and the net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were used to evaluate the improvement in prognostic assessment provided by CAR. Results: MACCEs occurred in 508 of the 1,683 patients (30.18%). Multivariate logistic regression showed that hypertension [OR (95%CI) = 1.994 (1.142-3.483)], implanted coronary stent length [OR (95%CI) = 1.031 (1.002-1.062)], CRP [OR (95%CI) = 0.950 (0.915-0.986)], Alb [OR (95%CI) = 0.933 (0.880-0.989)], and CAR [OR (95%CI) = 5.582 (1.705-18.277)] were factors influencing MACCEs after PCI (P < 0.05). Patients were divided by the median CAR (0.86) into CAR < 0.86 and CAR ≥ 0.86 groups; the log-rank test showed a higher MACCEs incidence in the CAR ≥ 0.86 group than in the CAR < 0.86 group (52.68% vs. 22.92%; χ² = 65.65, P < 0.001). The area under the ROC curve of CAR for predicting MACCEs was 0.728 (95%CI = 0.702-0.754), with an optimal cut-off of 0.576, sensitivity of 0.617, and specificity of 0.747. On top of the baseline model, CAR significantly improved the prediction of MACCEs compared with CRP and Alb (NRI = 0.377, IDI = 0.166, C-index = 0.690; P < 0.05). Conclusion: CAR is an effective predictor of the risk of long-term MACCEs after PCI in patients with T2DM complicated by AMI.
Keywords: myocardial infarction; diabetes mellitus, type 2; major adverse cardiovascular and cerebrovascular events; C-reactive protein; albumin; prediction
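The following is a minimal illustrative sketch (not the study's code) of the core quantities described above: the CRP/albumin ratio (CAR) and an ROC evaluation with a Youden-index cut-off, sensitivity, and specificity. The data are synthetic placeholders, not the Ningxia cohort.

```python
# Illustrative sketch only: synthetic values stand in for the study's cohort.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 200
crp = rng.gamma(shape=2.0, scale=5.0, size=n)      # mg/L, synthetic
alb = rng.normal(loc=40.0, scale=4.0, size=n)      # g/L, synthetic
car = crp / alb                                    # C-reactive protein / albumin ratio (CAR)
# Synthetic outcome loosely tied to CAR so the example yields a non-trivial AUC.
macce = (car + rng.normal(0, 0.2, size=n) > np.median(car)).astype(int)

auc = roc_auc_score(macce, car)
fpr, tpr, thresholds = roc_curve(macce, car)
youden_idx = np.argmax(tpr - fpr)                  # Youden index picks an optimal cut-off
print(f"AUC={auc:.3f}, cut-off={thresholds[youden_idx]:.3f}, "
      f"sensitivity={tpr[youden_idx]:.3f}, specificity={1 - fpr[youden_idx]:.3f}")
```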
Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models (Cited by: 1)
2
Authors: Zheyi Chen, Liuchang Xu, Hongting Zheng, Luyao Chen, Amr Tolba, Liang Zhao, Keping Yu, Hailin Feng. 《Computers, Materials & Continua》 SCIE, EI, 2024, Issue 8, pp. 1753-1808 (56 pages)
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLM) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLM) to Large Multimodal Models (LMM). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to the discussion of LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Keywords: Artificial intelligence; large language models; large multimodal models; foundation models
Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications (Cited by: 1)
3
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. 《Journal of Software Engineering and Applications》 2024, Issue 5, pp. 421-447 (27 pages)
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
Keywords: Large Language Models; PII Leakage; Privacy; Memorization; Overfitting; Membership Inference Attack (MIA)
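As a hedged illustration of the simplest attack family mentioned above, the sketch below scores candidate strings by their average token loss under a public causal language model (GPT-2 as a stand-in); unusually low loss is a weak membership/memorization signal. The threshold and example strings are assumptions, not the paper's setup.

```python
# Sketch of a loss-based membership-inference signal; GPT-2 is a stand-in model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def mean_token_loss(text: str) -> float:
    """Average cross-entropy per token; unusually low values suggest memorization."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

candidates = [
    "The quick brown fox jumps over the lazy dog.",   # generic text
    "John Doe's SSN is 123-45-6789.",                 # hypothetical PII-style string
]
threshold = 3.5  # illustrative decision threshold, not from the paper
for text in candidates:
    loss = mean_token_loss(text)
    verdict = "possible member" if loss < threshold else "likely non-member"
    print(f"{loss:.2f}  {verdict}  {text!r}")
```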
Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals
4
Authors: Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro. 《Computer Modeling in Engineering & Sciences》 SCIE, EI, 2024, Issue 10, pp. 689-711 (23 pages)
Deaf people or people facing hearing issues can communicate using sign language (SL), a visual language. Many works based on rich-resource source languages have been proposed; however, work using poor-resource languages is still lacking. Unlike other SLs, the visuals of the Urdu Language are different. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture specifically designed for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments using two datasets containing 1,500 and 78,000 images, employing a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was transformed into a greyscale image and underwent noise filtering. Comparative analysis with machine learning baseline methods (support vector machine, Gaussian Naive Bayes, random forest, and the k-nearest neighbors algorithm) on the UrSL alphabets dataset demonstrated the superiority of UrSL-CNN, achieving an accuracy of 0.95. Additionally, our model exhibited superior performance in Precision, Recall, and F1-score evaluations. This work not only contributes to advancing sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
Keywords: convolutional neural networks; Pakistan sign language; visual language
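The UrSL-CNN architecture itself is not given in the abstract, so the block below is only an assumed minimal convolutional classifier for greyscale sign images of the kind the pipeline describes (greyscale conversion and noise filtering happen upstream); layer sizes and the class count are placeholders.

```python
# Minimal CNN sketch for greyscale sign-alphabet images; layer sizes are assumptions.
import torch
import torch.nn as nn

class SmallSignCNN(nn.Module):
    def __init__(self, num_classes: int = 40, image_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (image_size // 4) ** 2, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) greyscale images, already denoised/normalized upstream
        return self.classifier(self.features(x).flatten(1))

model = SmallSignCNN()
dummy = torch.randn(8, 1, 64, 64)   # a batch of 8 synthetic greyscale images
print(model(dummy).shape)           # torch.Size([8, 40]) class logits
```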
Plain language in the healthcare of Japan: a systematic review of “plain Japanese”
5
Authors: Hatsune Kido, Soichiro Saeki, Mayu Hiraiwa, Masashi Yasunaga, Rie Tomizawa, Chika Honda, Toshio Fukuoka, Kaori Minamitani. 《Global Health Journal》 2024, Issue 3, pp. 113-118 (6 pages)
Objective: Despite the decrease in the number of foreign visitors and residents in Japan due to the coronavirus disease 2019, a resurgence is remarkable from 2022. However, Japan's medical support system for foreign patients, especially residents, is inadequate, with language barriers potentially causing health disparities. Comprehensive interpretation and translation services are challenging, but “plain Japanese” may be a viable alternative for foreign patients with basic Japanese language skills. This study explores the application and obstacles of plain Japanese in the medical sector. Methods: A literature review was performed across these databases: Web of Science, PubMed, Google Scholar, Scopus, CINAHL Plus, Springer Link, and Ichushi-Web (Japanese medical literature). The search covered themes related to healthcare, care for foreign patients, and scholarly articles, and was conducted in July 2023. Results: The study incorporated five papers. Each paper emphasized the language barriers foreign residents in Japan face when accessing healthcare, highlighting the critical role and necessity of plain Japanese in medical environments. Most of the reports focused on the challenges of delivering medical care to foreign patients and the training of healthcare professionals in using plain Japanese for communication. Conclusion: The knowledge and application of plain Japanese among healthcare professionals are inadequate, and literature also remains scarce. With the increasing number of foreign residents in Japan, the establishment of a healthcare system that effectively uses plain Japanese is essential. However, plain Japanese may not be the optimal linguistic assistance in certain situations, thus it is imperative to encourage more research and reports on healthcare services using plain Japanese.
Keywords: Plain Japanese; Easy Japanese; Plain language; Foreign residents; Healthcare access; Language barriers; Emigrants and immigrants
Systematizing Teacher Development: A Review of Foreign Language Teacher Learning
6
Author: Guang ZENG. 《Chinese Journal of Applied Linguistics》 2024, Issue 3, pp. 518-523, 526 (7 pages)
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan from Capital Normal University and published in September 2022, provides a systematic introduction to foreign language teacher learning, which to some extent makes up for this shortcoming. The book presents the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews cutting-edge research results, and foresees future development trends, painting a complete research picture for researchers in the field of foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. This is an important inspiration for conducting foreign language teacher learning research in the future. This paper reviews the book in terms of its content, major characteristics, contributions, and limitations.
Keywords: foreign language teacher learning; foreign language teacher education; foreign language teaching; teacher development
SR9009 Combined with Indole-3-Propionic Acid Alleviates the Inflammatory Response of C2C12 Myoblasts via the Nuclear Factor-κB Signaling Pathway
7
Authors: 姬慧慧, 蒋旭, 张志敏, 邢运虹, 王亮亮, 李娜, 宋雨庭, 罗旭光, 崔慧林, 曹锡梅. 《中国组织工程研究》 CAS, PKU Core Journal, 2025, Issue 6, pp. 1220-1229 (10 pages)
Background: The clock gene Rev-erbα participates in the regulation of inflammation, but activating Rev-erbα increases the risk of cardiovascular and cerebrovascular diseases. To reduce this risk, the Rev-erbα agonist SR9009 was explored in combination with other agents to attenuate inflammation in skeletal muscle myoblasts and lay a theoretical foundation for treating inflammation-related skeletal muscle atrophy. Objective: To explore the relationship among indole-3-propionic acid, SR9009, and the nuclear factor-κB (NF-κB) signaling pathway in lipopolysaccharide-stimulated C2C12 myoblasts. Methods: (1) C2C12 myoblasts were stimulated with 1 μg/mL lipopolysaccharide, and RNA transcriptome sequencing combined with KEGG pathway enrichment analysis was used to identify the signaling pathways involved. (2) The CCK-8 assay was used to measure C2C12 myoblast viability and screen the optimal concentration of indole-3-propionic acid; cells were then divided into a blank control group, a lipopolysaccharide (1 μg/mL) group, an SR9009 (10 μmol/L) + lipopolysaccharide group, an indole-3-propionic acid (80 μmol/L) + lipopolysaccharide group, and an indole-3-propionic acid + SR9009 + lipopolysaccharide group. ELISA was used to measure interleukin-6 levels in the cell supernatant, RT-qPCR to measure interleukin-6, tumor necrosis factor-α, Toll-like receptor 4, and CD14 mRNA expression, and Western blot to measure NF-κB p65 and p-NF-κB p65 protein expression. (3) Rev-erbα was knocked down with siRNA, knockdown efficiency was evaluated by RT-qPCR, and interleukin-6 and tumor necrosis factor-α mRNA expression was measured. Results and conclusion: (1) Compared with the blank control group, lipopolysaccharide inhibited myoblast fusion into myotubes in a time-dependent manner, increased interleukin-6 and tumor necrosis factor-α mRNA expression, and significantly increased interleukin-6 levels in the supernatant; KEGG pathway analysis supported activation of the NF-κB signaling pathway by lipopolysaccharide stimulation. (2) Indole-3-propionic acid at concentrations above 80 μmol/L inhibited C2C12 myoblast viability; indole-3-propionic acid and SR9009 exerted anti-inflammatory effects by inhibiting the NF-κB signaling pathway, reducing interleukin-6, tumor necrosis factor-α, Toll-like receptor 4, and CD14 mRNA expression, with a lower p-NF-κB p65/NF-κB p65 protein ratio than in the lipopolysaccharide group. SR9009 combined with indole-3-propionic acid significantly reduced lipopolysaccharide-induced inflammation: Toll-like receptor 4, CD14, interleukin-6, and tumor necrosis factor-α mRNA expression was further downregulated, and the p-NF-κB p65/NF-κB p65 ratio was significantly lower than in the indole-3-propionic acid + lipopolysaccharide and SR9009 + lipopolysaccharide groups. (3) Rev-erbα increased with lipopolysaccharide stimulation in a time-dependent manner; siRNA knockdown efficiency of Rev-erbα exceeded 58%, and when lipopolysaccharide was added after successful knockdown, interleukin-6 and tumor necrosis factor-α mRNA expression was significantly upregulated compared with the lipopolysaccharide group. (4) These results indicate that Rev-erbα can serve as a target for regulating the inflammatory response: SR9009-mediated activation of Rev-erbα combined with indole-3-propionic acid inhibits the NF-κB signaling pathway and markedly alleviates the inflammatory response of C2C12 myoblasts, with a combined anti-inflammatory effect superior to either intervention alone.
Keywords: Rev-erbα; SR9009; indole-3-propionic acid; lipopolysaccharide; NF-κB signaling pathway; C2C12 myoblasts
Literature classification and its applications in condensed matter physics and materials science by natural language processing
8
Authors: 吴思远, 朱天念, 涂思佳, 肖睿娟, 袁洁, 吴泉生, 李泓, 翁红明. 《Chinese Physics B》 SCIE, EI, CAS, CSCD, 2024, Issue 5, pp. 117-123 (7 pages)
The exponential growth of literature is constraining researchers' access to comprehensive information in related fields. While natural language processing (NLP) may offer an effective solution to literature classification, it remains hindered by the lack of labelled datasets. In this article, we introduce a novel method for generating literature classification models through semi-supervised learning, which can generate labelled datasets iteratively with limited human input. We apply this method to train NLP models for classifying literature related to several research directions, i.e., battery, superconductor, topological material, and artificial intelligence (AI) in materials science. The trained NLP ‘battery’ model, applied to a larger dataset different from the training and testing dataset, can achieve an F1 score of 0.738, which indicates the accuracy and reliability of this scheme. Furthermore, our approach demonstrates that even with insufficient data, the not-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
Keywords: natural language processing; text mining; materials science
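A schematic of the iterative pseudo-labelling idea described above (labelled data generated over several cycles with limited human input), using TF-IDF and logistic regression as stand-ins for the paper's NLP models; the texts, confidence threshold, and stopping rule are illustrative assumptions, and a real loop would also route uncertain items to a human annotator.

```python
# Sketch of iterative self-training for literature classification; data and thresholds are assumptions.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["solid-state electrolyte for lithium batteries",
                 "topological insulator band structure"]
labels = np.array([1, 0])                      # 1 = battery-related, 0 = other (toy seed set)
pool = ["cathode coating improves cycling stability",
        "superconducting gap measured by ARPES",
        "anode materials for sodium-ion batteries"]

vectorizer = TfidfVectorizer()
X_lab = vectorizer.fit_transform(labeled_texts)
X_pool = vectorizer.transform(pool)

for cycle in range(3):                         # a few self-training cycles
    clf = LogisticRegression().fit(X_lab, labels)
    proba = clf.predict_proba(X_pool)
    confident = proba.max(axis=1) > 0.6        # keep only confident pseudo-labels
    if not confident.any():
        break
    X_lab = vstack([X_lab, X_pool[confident]])
    labels = np.concatenate([labels, proba[confident].argmax(axis=1)])

print(clf.predict(X_pool))                     # pseudo-labels for the unlabeled pool
```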
Automatic Generation of Attribute-Based Access Control Policies from Natural Language Documents
9
Authors: Fangfang Shan, Zhenyu Wang, Mengyao Liu, Menghan Zhang. 《Computers, Materials & Continua》 SCIE, EI, 2024, Issue 9, pp. 3881-3902 (22 pages)
In response to the challenges of generating Attribute-Based Access Control (ABAC) policies, this paper proposes a deep learning-based method to automatically generate ABAC policies from natural language documents. This method is aimed at organizations such as companies and schools that are transitioning from traditional access control models to the ABAC model. The manual retrieval and analysis involved in this transition are inefficient, prone to errors, and costly. Most organizations have high-level specifications defined for security policies that include a set of access control policies, which often exist in the form of natural language documents. Utilizing this rich source of information, our method effectively identifies and extracts the necessary attributes and rules for access control from natural language documents, thereby constructing and optimizing access control policies. This work transforms the problem of automated policy generation into two tasks: extraction of access control statements and mining of access control attributes. First, the Chat General Language Model (ChatGLM) is employed to extract access control-related statements from a wide range of natural language documents by constructing unique prompts and leveraging the model's In-Context Learning to contextualize the statements. Then, the Iterated Dilated-Convolutions-Conditional Random Field (ID-CNN-CRF) model is used to annotate access control attributes within these extracted statements, including subject attributes, object attributes, and action attributes, thus reassembling new access control policies. Experimental results show that our method, compared to baseline methods, achieved the highest F1 score of 0.961, confirming the model's effectiveness and accuracy.
Keywords: Access control; policy generation; natural language; deep learning
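As a hedged illustration of the end product of this pipeline, the sketch below assembles one ABAC rule from subject, object, and action attributes annotated in an extracted statement; the statement, attribute names, and JSON layout are hypothetical rather than the paper's schema.

```python
# Hypothetical example of assembling an ABAC rule from annotated attributes.
import json

# Output a sequence tagger (e.g., an ID-CNN-CRF-style model) might produce for one
# extracted access-control statement; the labels and statement are made up here.
annotated = {
    "statement": "Professors can grade assignments of courses they teach.",
    "subject_attributes": {"role": "professor"},
    "object_attributes": {"type": "assignment", "course.instructor": "${subject.id}"},
    "action_attributes": {"action": "grade"},
}

def to_abac_rule(ann: dict) -> dict:
    """Reassemble annotated attributes into a single permit rule."""
    return {
        "effect": "permit",
        "subject": ann["subject_attributes"],
        "resource": ann["object_attributes"],
        "action": ann["action_attributes"]["action"],
    }

print(json.dumps(to_abac_rule(annotated), indent=2))
```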
Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English
10
Authors: Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. 《Computer Modeling in Engineering & Sciences》 SCIE, EI, 2024, Issue 9, pp. 2849-2868 (20 pages)
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test-set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test-set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result in the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
Keywords: Hate speech detection; zero-shot; few-shot; fine-tuning; natural language processing
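A small sketch of the zero-shot setting evaluated above, using the Hugging Face zero-shot-classification pipeline with a generic NLI checkpoint as a stand-in; the labels and example text are illustrative and do not reproduce the EDOS/HatEval prompts or models.

```python
# Zero-shot sketch with a generic NLI model; labels and text are illustrative only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")  # stand-in model, not from the paper
result = classifier(
    "I can't believe they let people like that post here.",
    candidate_labels=["hateful", "sexist", "neutral"],
)
print(result["labels"][0], result["scores"][0])  # top predicted label and its score
```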
Enhancing Orthopedic Knowledge Assessments: The Performance of Specialized Generative Language Model Optimization
11
Authors: Hong ZHOU, Hong-lin WANG, Yu-yu DUAN, Zi-neng YAN, Rui LUO, Xiang-xin LV, Yi XIE, Jia-yao ZHANG, Jia-ming YANG, Ming-di XUE, Ying FANG, Lin LU, Peng-ran LIU, Zhe-wei YE. 《Current Medical Science》 SCIE, CAS, 2024, Issue 5, pp. 1001-1005 (5 pages)
Objective: This study aimed to evaluate and compare the effectiveness of knowledge base-optimized and unoptimized large language models (LLMs) in the field of orthopedics to explore optimization strategies for the application of LLMs in specific fields. Methods: This research constructed a specialized knowledge base using clinical guidelines from the American Academy of Orthopaedic Surgeons (AAOS) and authoritative orthopedic publications. A total of 30 orthopedic-related questions covering aspects such as anatomical knowledge, disease diagnosis, fracture classification, treatment options, and surgical techniques were input into both the knowledge base-optimized and unoptimized versions of GPT-4, ChatGLM, and Spark LLM, with their generated responses recorded. The overall quality, accuracy, and comprehensiveness of these responses were evaluated by 3 experienced orthopedic surgeons. Results: Compared with their unoptimized counterparts, the optimized version of GPT-4 showed improvements of 15.3% in overall quality, 12.5% in accuracy, and 12.8% in comprehensiveness; ChatGLM showed improvements of 24.8%, 16.1%, and 19.6%, respectively; and Spark LLM showed improvements of 6.5%, 14.5%, and 24.7%, respectively. Conclusion: The optimization of knowledge bases significantly enhances the quality, accuracy, and comprehensiveness of the responses provided by the 3 models in the orthopedic field. Therefore, knowledge base optimization is an effective method for improving the performance of LLMs in specific fields.
Keywords: artificial intelligence; large language models; generative artificial intelligence; orthopedics
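The abstract does not spell out the optimization mechanism, so the block below assumes a retrieval-augmented reading of "knowledge base optimization": retrieve the guideline snippets most relevant to a question and prepend them to the prompt sent to the LLM. The snippets, the question, and the final LLM call are hypothetical.

```python
# Assumed retrieval-augmented sketch of knowledge-base optimization; all content is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Guideline snippet A on distal radius fracture management ...",
    "Textbook excerpt on the Garden classification of femoral neck fractures ...",
    "Guideline snippet on postoperative rehabilitation after ACL reconstruction ...",
]
question = "How are femoral neck fractures classified?"

vec = TfidfVectorizer().fit(knowledge_base + [question])
sims = cosine_similarity(vec.transform([question]), vec.transform(knowledge_base))[0]
top_k = sims.argsort()[::-1][:2]                       # two most relevant snippets

prompt = ("Answer using the reference material below.\n\n"
          + "\n".join(knowledge_base[i] for i in top_k)
          + f"\n\nQuestion: {question}")
# In the study's setting, `prompt` would then be sent to GPT-4 / ChatGLM / Spark LLM.
print(prompt)
```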
Recent Advances on Deep Learning for Sign Language Recognition
12
Authors: Yanqiong Zhang, Xianwei Jiang. 《Computer Modeling in Engineering & Sciences》 SCIE, EI, 2024, Issue 6, pp. 2399-2450 (52 pages)
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: Sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework
13
Authors: Hao Chen, Runfeng Xie, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan, Kai Zhang. 《Computers, Materials & Continua》 SCIE, EI, 2024, Issue 6, pp. 4283-4296 (14 pages)
Accurately recommending candidate news to users is a basic challenge of personalized news recommendation systems. Traditional methods usually find it difficult to learn and acquire complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are more friendly to active users with rich historical behaviors; however, they cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) with traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information, and then the generated news representations are used to enhance the news encoding in traditional methods. In addition, multi-hop relationships of news entities are mined and the structural information of news is encoded using KG, thus alleviating the challenge of long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework has established a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
Keywords: Large language models; news recommendation; knowledge graphs (KG)
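A toy sketch of the fusion idea described above: each news item is represented by concatenating an LLM-derived text embedding with a KG-derived entity embedding, and candidates are ranked against a user vector. Dimensions and all vectors are synthetic placeholders, not the LKPNR implementation.

```python
# Toy fusion of LLM text embeddings and KG entity embeddings for ranking; all vectors are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, d_text, d_kg = 5, 16, 8

llm_text_emb = rng.normal(size=(n_candidates, d_text))   # stand-in for LLM news representations
kg_entity_emb = rng.normal(size=(n_candidates, d_kg))    # stand-in for KG entity encodings
news_repr = np.concatenate([llm_text_emb, kg_entity_emb], axis=1)

# User profile = mean representation of previously clicked news (synthetic here).
user_repr = news_repr[:2].mean(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

scores = np.array([cosine(user_repr, cand) for cand in news_repr])
print("ranked candidate indices:", scores.argsort()[::-1])
```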
Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
14
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. 《Computers, Materials & Continua》 SCIE, EI, 2024, Issue 3, pp. 4379-4397 (19 pages)
Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets; and an ensemble method, which is utilized to explore the increase in performance upon combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. It reveals that using an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and providing a high-performing, useful end model.
Keywords: Natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
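A compact sketch of three of the ingredients named above (feature selection, an ensemble of classifiers, AUC-ROC evaluation) on synthetic data; it does not reproduce the NASA/Promise experiments or the transfer-learning protocol.

```python
# Feature selection + voting ensemble + AUC-ROC on synthetic defect-style data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft",                                   # average predicted probabilities
)
model = make_pipeline(SelectKBest(f_classif, k=15), ensemble)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC-ROC on held-out synthetic data: {auc:.3f}")
```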
Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
15
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. 《Computers, Materials & Continua》 SCIE, EI, 2024, Issue 5, pp. 2481-2503 (23 pages)
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance in tackling domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module. This method significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the State of the Art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: Relational triple extraction; semantic interaction; large language models; data augmentation; specific domains
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
16
Authors: R. Sujatha, K. Nimala. 《Computers, Materials & Continua》 SCIE, EI, 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversational sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder for Representation of Transformer (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional encoder for representation of transformer; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
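A reduced sketch of the ensembling step only: averaging per-model class probabilities over the four conversational categories. The probability values are placeholders standing in for the fine-tuned BERT, RoBERTa, GPT, DistilBERT, and XLNet members.

```python
# Ensemble by probability averaging over four conversational-sentence classes.
import numpy as np

classes = ["information", "question", "directive", "commission"]
# Assumed per-model class probabilities for one sentence (placeholders for the
# fine-tuned BERT / RoBERTa / GPT / DistilBERT / XLNet members).
member_probs = np.array([
    [0.70, 0.10, 0.15, 0.05],   # BERT
    [0.60, 0.20, 0.15, 0.05],   # RoBERTa
    [0.55, 0.25, 0.10, 0.10],   # GPT
    [0.65, 0.15, 0.15, 0.05],   # DistilBERT
    [0.50, 0.30, 0.10, 0.10],   # XLNet
])
ensemble_probs = member_probs.mean(axis=0)          # soft-voting fusion
print(classes[int(ensemble_probs.argmax())], ensemble_probs)
```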
Mapping the Research Landscape of Language Development in Autistic Children: A Preliminary Scientometric Analysis
17
Authors: Zhonghua WU, Le CHENG. 《Chinese Journal of Applied Linguistics》 2024, Issue 4, pp. 670-686, 688 (18 pages)
Children with autism spectrum disorder (ASD) often encounter difficulties in language learning and utilization, a concern that has gained significant academic attention, particularly given the widespread occurrence of ASD globally. Previous reviews, however, have relied on empirical observations rather than a more rigorous selection criterion. This preliminary study seeks to systematize the scientific knowledge base regarding language development in autistic children by utilizing the analysis tool Citespace 6.2.R5. We visualized and analyzed research patterns and trends regarding autism by drawing data from the Web of Science. Through document citation and emerging trend analyses, seven key research clusters and their chronological associations are identified, along with research hotspots such as language disorder diagnosis and intervention, social communication, language acquisition, and multilingual and multicultural influences. Research findings show that there exist some issues with the current research, including small sample sizes, the need for further investigation into receptive language development, and a lack of cross-cultural comparative studies. Meanwhile, the scope and depth of interdisciplinary research on language development in autistic children also need to be further enhanced. The research contributes to the extant literature by providing valuable references for autism researchers and practitioners.
Keywords: language development; autistic children; scientometric analysis; ASD; tentative study
Impact of transcranial electrical stimulation on serum neurotrophic factors and language function in patients with speech disorders
18
Authors: Li Sun, Kai Xiao, Xiao-Yan Shen, Shu Wang. 《World Journal of Clinical Cases》 SCIE, 2024, Issue 10, pp. 1742-1749 (8 pages)
BACKGROUND: Speech disorders have a substantial impact on communication abilities and quality of life. Traditional treatments such as speech and psychological therapies frequently demonstrate limited effectiveness and patient compliance. Transcranial electrical stimulation (TES) has emerged as a promising non-invasive treatment to improve neurological functions. However, its effectiveness in enhancing language functions and serum neurofactor levels in individuals with speech disorders requires further investigation. AIM: To investigate the impact of TES in conjunction with standard therapies on serum neurotrophic factor levels and language function in patients with speech disorders. METHODS: In a controlled study spanning from March 2019 to November 2021, 81 patients with speech disorders were divided into a control group (n = 40) receiving standard speech stimulation and psychological intervention, and an observation group (n = 41) receiving additional TES. The study assessed serum levels of ciliary neurotrophic factor (CNTF), glial cell-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), and nerve growth factor (NGF), as well as evaluations of motor function, language function, and development quotient scores. RESULTS: After 3 wk of intervention, the observation group exhibited significantly higher serum levels of CNTF, GDNF, BDNF, and NGF compared to the control group. Moreover, improvements were noted in motor function, cognitive function, language skills, physical abilities, and overall development quotient scores. It is worth mentioning that the observation group also displayed superior performance. CONCLUSION: This retrospective study concluded that TES combined with traditional speech and psychotherapy can effectively increase the levels of neurokines in the blood and enhance language function in patients with speech disorders. These results provide a promising avenue for integrating TES into standard treatment methods for speech disorders.
Keywords: Transcranial electrical stimulation; Serum neurofactor levels; Developmental level; Language features
A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
19
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. 《Computer Modeling in Engineering & Sciences》 SCIE, EI, 2024, Issue 7, pp. 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Network (CapsNet), and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied to specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interests among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
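Dynamic Time Warping (DTW) is one of the traditional CSLR techniques the survey highlights; below is a self-contained implementation of the classic DTW distance between two 1-D gesture feature sequences, with made-up feature values.

```python
# Classic dynamic-programming DTW distance between two 1-D feature sequences.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])                    # local distance
            cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                 cost[i, j - 1],            # deletion
                                 cost[i - 1, j - 1])        # match
    return cost[n, m]

template = [0.1, 0.5, 0.9, 0.5, 0.1]        # made-up gesture trajectory features
query = [0.1, 0.2, 0.6, 0.9, 0.8, 0.4, 0.1]
print(dtw_distance(template, query))
```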
Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
20
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. 《Computer Modeling in Engineering & Sciences》 SCIE, EI, 2024, Issue 6, pp. 2605-2625 (21 pages)
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. Then, we concatenated the critical information of the first stream and the hierarchy of the second stream features to produce the multiple levels of the fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for the classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL); hand gesture recognition; geometric feature; distance feature; angle feature; GoogleNet
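A reduced sketch of the handcrafted stream described above: distance and angle features computed from 2-D joint coordinates, concatenated with a placeholder deep-feature vector and classified with a kernel SVM. The joint layout, feature choices, and data are assumptions, not the paper's pipeline.

```python
# Skeleton distance/angle features fused with placeholder deep features, classified by an SVM.
import numpy as np
from sklearn.svm import SVC

def hand_features(joints: np.ndarray) -> np.ndarray:
    """joints: (n_joints, 2) 2-D keypoints. Returns distance + angle features."""
    wrist = joints[0]
    vecs = joints[1:] - wrist
    dists = np.linalg.norm(vecs, axis=1)            # distances of each joint to the wrist
    angles = np.arctan2(vecs[:, 1], vecs[:, 0])     # joint angles around the wrist
    return np.concatenate([dists, angles])

rng = np.random.default_rng(0)
n_samples, n_joints, n_deep = 60, 21, 32
X, y = [], []
for label in (0, 1, 2):                             # three toy gesture classes
    for _ in range(n_samples // 3):
        joints = rng.normal(loc=label, scale=0.3, size=(n_joints, 2))
        deep_feat = rng.normal(loc=label, scale=0.3, size=n_deep)  # stand-in CNN features
        X.append(np.concatenate([hand_features(joints), deep_feat]))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy on toy data:", clf.score(np.array(X), np.array(y)))
```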