Journal Literature: 101,480 articles found
1. The Application of Mathematica to the Visualization of Physical Chemistry Formulas
Authors: 袁汝明, 张来英, 徐晓明, 吴平平, 傅钢. 《大学化学》, CAS, 2024, Issue 8, pp. 375-382 (8 pages)
With the rapid development of computer technology, software tools play an increasingly prominent role in chemistry teaching and research. Taking the Mathematica software as an example, this paper explores its application to the visualization of physical chemistry formulas. Through several typical examples, such as the visualization of the van der Waals equation of state, the shapes of atomic orbitals and electron clouds, and chemical reaction rate equations, it offers new perspectives and methods for the teaching and study of physical chemistry. Through an interactive learning mode, students can not only observe and understand the physical meaning behind the theory through intuitive graphics, but also have their interest stimulated, strengthening their initiative and creativity in learning.
Keywords: Mathematica; physical chemistry; formulas; visualization
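A minimal Wolfram Language sketch of the kind of interactive visualization described above (illustrative, not the authors' code; it uses the standard reduced form of the van der Waals equation, Pr = 8 Tr/(3 Vr - 1) - 3/Vr^2):

    (* reduced van der Waals isotherm; the critical point sits at Pr = Vr = Tr = 1 *)
    vdwP[v_, t_] := 8 t/(3 v - 1) - 3/v^2

    Manipulate[
     Plot[vdwP[v, t], {v, 0.45, 5}, PlotRange -> {-0.5, 2.5},
      AxesLabel -> {"V/Vc", "P/Pc"}, PlotLabel -> Row[{"Tr = ", t}]],
     {{t, 1.0, "reduced temperature Tr"}, 0.85, 1.25, Appearance -> "Labeled"}]

Dragging the slider across Tr = 1 shows the isotherm flattening at the critical point and developing the familiar unphysical loop below it.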
2. Simulation and Visualization of Grating Diffraction under Oblique Incidence of Parallel Light Based on Mathematica
Authors: 高峰, 马超, 孙丰伟, 张红, 赵文丽. 《山东农业大学学报(自然科学版)》, Peking University Core, 2024, Issue 5, pp. 797-803 (7 pages)
Grating diffraction is an important and difficult topic in university physics teaching. Because its theoretical derivation and mathematical expressions are complex, students struggle to form a clear physical picture, and classroom teaching is often unsatisfactory. Textbooks discuss the diffraction characteristics only for light at normal incidence on the grating; the characteristics for parallel light at oblique incidence are not fully explored, and research in this area remains limited. This paper first applies the theory of diffraction to derive the intensity distributions of single-slit diffraction and grating diffraction under oblique incidence of parallel light, extending the treatment of both from normal to oblique incidence. Then, using Mathematica's powerful interactive interface Manipulate, combined with the Initialization and Limit commands, the intensity distributions for oblique incidence are systematically simulated, plotting the relative intensity distributions of single-slit diffraction, multi-slit interference, and grating diffraction, as well as the layout of the diffraction fringes. The visualization makes clear that the grating-diffraction intensity distribution is the joint result of single-slit diffraction and multi-slit interference: single-slit diffraction provides the envelope of the intensity distribution, while multi-slit interference determines the detailed bright and dark fringes. Presenting the abstract diffraction phenomenon as an intuitive dynamic demonstration helps students build a clear physical picture, deepens their understanding of single-slit and grating diffraction, and greatly improves teaching effectiveness.
Keywords: grating diffraction; single-slit diffraction; oblique incidence of parallel light; Mathematica; visualization; simulation
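The standard oblique-incidence result (a hedged sketch of the simulation workflow, not the paper's notebook) replaces sin θ by sin θ - sin θ0 in both the single-slit and multi-slit factors; the slit width, period, and wavelength below are illustrative values:

    (* I(θ) ∝ Sinc[α]^2 (Sinc[N β]/Sinc[β])^2, with
       α = Pi a (Sin[θ] - Sin[θ0])/λ  (single-slit factor, slit width a)
       β = Pi d (Sin[θ] - Sin[θ0])/λ  (multi-slit factor, grating period d) *)
    intensity[θ_, θ0_, a_, d_, nSlits_, λ_] := Module[
      {α = Pi a (Sin[θ] - Sin[θ0])/λ, β = Pi d (Sin[θ] - Sin[θ0])/λ},
      Sinc[α]^2 (Sinc[nSlits β]/Sinc[β])^2]

    Manipulate[
     Plot[intensity[θ, θ0 Degree, 2.0*10^-6, 6.0*10^-6, n, 632.8*10^-9],
      {θ, -80 Degree, 80 Degree}, PlotRange -> {0, 1}, PlotPoints -> 200,
      AxesLabel -> {"θ", "I/I0"}],
     {{θ0, 0, "incidence angle θ0 (degrees)"}, -30, 30},
     {{n, 5, "number of slits N"}, 2, 10, 1}]

Setting θ0 = 0 recovers the normal-incidence textbook pattern; a nonzero θ0 visibly shifts the single-slit envelope and the principal maxima together, which is the effect the paper visualizes.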
3. Analysis of Constrained Multi-Body Motion Problems Based on the Mathematica Software
Authors: 郭龙飞, 姚宏林, 宋铁岭, 李柏力, 吴昊. 《物理通报》, CAS, 2024, Issue 9, pp. 136-139 (4 pages)
Constrained multi-body motion involves the combined application of particle kinematics, dynamics, and conservation laws, and is one of the more difficult classes of problems in university physics. This paper studies a typical case of constrained multi-body motion: three small balls connected by two light rigid thin rods. Given the initial conditions, starting from the conservation laws and supplemented by the constraint equations, the problem is solved using Mathematica's powerful symbolic computation and equation-solving capabilities, yielding the detailed motion of the system, including how the particles' position coordinates, velocities, accelerations, and other physical quantities vary with time, all of which are visualized. In addition, the effect of changing the initial conditions on the motion is discussed. Solving such problems with Mathematica frees students from tedious mathematics, letting them focus on understanding the physical concepts and improving their learning efficiency.
Keywords: Mathematica; constraints; multi-body motion
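A minimal sketch of the general workflow (set up the equations of motion plus initial conditions, solve numerically, visualize), using a single plane pendulum rather than the paper's three-ball system; g, L, and the initial angle are illustrative:

    (* plane pendulum: a point mass constrained by a light rigid rod of length L *)
    g = 9.8; L = 1.0;
    sol = First @ NDSolve[
       {θ''[t] == -(g/L) Sin[θ[t]], θ[0] == 1.2, θ'[0] == 0}, θ, {t, 0, 10}];

    (* angle and angular velocity versus time *)
    Plot[Evaluate[{θ[t], θ'[t]} /. sol], {t, 0, 10},
     PlotLegends -> {"θ(t)", "θ'(t)"}, AxesLabel -> {"t/s", ""}]

    (* animate the bob position enforced by the rod constraint *)
    Animate[Graphics[{
       Line[{{0, 0}, L {Sin[θ[t]], -Cos[θ[t]]} /. sol}],
       Disk[L {Sin[θ[t]], -Cos[θ[t]]} /. sol, 0.05]},
      PlotRange -> {{-1.2, 1.2}, {-1.2, 0.2}}], {t, 0, 10}]

For the three-ball, two-rod system the same pattern applies, with the conservation laws and rod-length constraints supplying the equations handed to the solver.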
4. Dynamic Simulation of Head-On Collision Phenomena Based on Mathematica
Authors: 成立贤, 李明洋, 王佳, 顾吉林. 《大学物理实验》, 2024, Issue 5, pp. 107-112 (6 pages)
The head-on collision is an idealized model of collision and one of the important topics in classical mechanics. Using the powerful programming software Mathematica and the relevant formulas of classical mechanics, this study builds precise dynamic simulations of perfectly elastic collisions, perfectly inelastic collisions, and partially elastic collisions, as well as three common collision-like models: the bullet model, the spring model, and the block-on-board model. The simulations take the shapes of the objects into account and produce a series of vivid animations that display the real-time variation of key physical quantities such as velocity. These help learners build a clear physical picture, strengthen their understanding of head-on collisions, and enrich the digital resource library for physics teaching. The work also demonstrates the broad prospects of the Mathematica programming software for developing physics teaching resources.
Keywords: head-on collision; Mathematica; collision-like models; digital resource development; dynamic simulation
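A compact sketch of the underlying kinematics (the standard restitution-coefficient formulas, not the authors' animations), covering the elastic, inelastic, and intermediate cases with one slider:

    (* 1-D two-body collision with restitution coefficient e:
       e = 1 perfectly elastic, e = 0 perfectly inelastic, 0 < e < 1 in between *)
    finalVelocities[m1_, m2_, u1_, u2_, e_] :=
     {((m1 - e m2) u1 + (1 + e) m2 u2)/(m1 + m2),
      ((1 + e) m1 u1 + (m2 - e m1) u2)/(m1 + m2)}

    Manipulate[
     With[{v = finalVelocities[m1, m2, u1, u2, e]},
      Column[{Row[{"v1' = ", v[[1]], ",  v2' = ", v[[2]]}],
        Row[{"momentum: ", m1 u1 + m2 u2, " -> ", m1 v[[1]] + m2 v[[2]]}]}]],
     {{m1, 1.0}, 0.1, 5}, {{m2, 2.0}, 0.1, 5},
     {{u1, 3.0}, -5, 5}, {{u2, -1.0}, -5, 5}, {{e, 1.0}, 0, 1}]

Momentum is conserved for every e, while kinetic energy is conserved only at e = 1, which is easy to verify interactively.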
5. Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals
Authors: Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, Issue 10, pp. 689-711 (23 pages)
Deaf people or people facing hearing issues can communicate using sign language (SL), a visual language. Many works based on rich-resource languages have been proposed; however, work using low-resource languages is still lacking. Unlike other SLs, the visuals of the Urdu Language are different. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture specifically designed for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments using two datasets containing 1500 and 78,000 images, employing a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was transformed into a greyscale image and underwent noise filtering. Comparative analysis with machine learning baseline methods (support vector machine, Gaussian Naive Bayes, random forest, and the k-nearest neighbors algorithm) on the UrSL alphabets dataset demonstrated the superiority of UrSL-CNN, which achieved an accuracy of 0.95. Additionally, our model exhibited superior performance in Precision, Recall, and F1-score evaluations. This work not only contributes to advancing sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
Keywords: convolutional neural networks; Pakistan sign language; visual language
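For readers who want to see the shape of such a pipeline, here is a hedged Wolfram Language sketch of a small grayscale-image CNN classifier in the spirit of UrSL-CNN; the layer sizes, the 64x64 input, and the 40-class output are assumptions, not the published architecture:

    classes = Range[40];  (* placeholder for the UrSL alphabet labels *)
    urslNet = NetChain[{
       ConvolutionLayer[16, {3, 3}], Ramp, PoolingLayer[{2, 2}],
       ConvolutionLayer[32, {3, 3}], Ramp, PoolingLayer[{2, 2}],
       FlattenLayer[], LinearLayer[Length[classes]], SoftmaxLayer[]},
      "Input" -> NetEncoder[{"Image", {64, 64}, ColorSpace -> "Grayscale"}],
      "Output" -> NetDecoder[{"Class", classes}]];

    (* trainingData would be a list of rules: image -> class *)
    (* trained = NetTrain[urslNet, trainingData, MaxTrainingRounds -> 10] *)

The grayscale NetEncoder mirrors the abstract's pre-processing step of converting each sign image to greyscale before classification.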
6. Development of Physics Teaching Resources Based on Mathematica: Dynamic Simulation of Wave Phenomena as an Example
Authors: 张斯博, 刘思雨, 宋思盈, 洪许海, 周兴玉. 《大学物理实验》, 2024, Issue 1, pp. 85-91 (7 pages)
Developing physics teaching resources with computer software is an important part of contemporary physics education. Physics teachers in the new era should master modern computing tools and turn them into instruments of teaching reform and innovation. As a demonstration, this paper uses the mathematical software Mathematica to build precise dynamic simulations of several typical wave phenomena in classical physics. The resulting series of animations is vivid and intuitive, helping learners form a clear physical picture, broadening their understanding of these wave phenomena, deepening their grasp of the underlying wave laws, and adding rich teaching resources to physics courses. The study also fully demonstrates the power and broad application prospects of the Mathematica software for developing physics teaching resources.
Keywords: wave phenomena; dynamic simulation; physics teaching resource development; Mathematica
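As one concrete example of the kind of animation described (a minimal sketch with illustrative amplitude, wavelength, and speed, not the authors' notebooks), two counter-propagating harmonic waves and their standing-wave superposition:

    Animate[
     Plot[{Sin[2 Pi (x - t)], Sin[2 Pi (x + t)],
       Sin[2 Pi (x - t)] + Sin[2 Pi (x + t)]}, {x, 0, 4},
      PlotRange -> {-2.2, 2.2}, PlotStyle -> {Dashed, Dashed, Thick},
      PlotLegends -> {"y1 (right-moving)", "y2 (left-moving)", "y1 + y2"},
      AxesLabel -> {"x", "y"}], {t, 0, 1}]

The nodes of the thick superposed curve stay fixed while the two travelling components slide through each other, which is exactly the physical picture such animations are meant to convey.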
7. Systematizing Teacher Development: A Review of Foreign Language Teacher Learning
Author: Guang ZENG. 《Chinese Journal of Applied Linguistics》, 2024, Issue 3, pp. 518-523, 526 (7 pages)
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan of Capital Normal University and published in September 2022, provides a systematic introduction to foreign language teacher learning, which to some extent makes up for this shortcoming. The book presents the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews cutting-edge research results, and anticipates future development trends, painting a complete research picture for researchers in the field of foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. This is an important inspiration for conducting foreign language teacher learning research in the future. This paper reviews the book in terms of its content, major characteristics, contributions, and limitations.
Keywords: foreign language teacher learning; foreign language teacher education; foreign language teaching; teacher development
8. Plain language in the healthcare of Japan: a systematic review of "plain Japanese"
Authors: Hatsune Kido, Soichiro Saeki, Mayu Hiraiwa, Masashi Yasunaga, Rie Tomizawa, Chika Honde, Toshio Fukuoka, Kaori Minamitani. 《Global Health Journal》, 2024, Issue 3, pp. 113-118 (6 pages)
Objective: Despite the decrease in the number of foreign visitors and residents in Japan due to coronavirus disease 2019, a marked resurgence has been underway since 2022. However, Japan's medical support system for foreign patients, especially residents, is inadequate, with language barriers potentially causing health disparities. Comprehensive interpretation and translation services are challenging to provide, but "plain Japanese" may be a viable alternative for foreign patients with basic Japanese language skills. This study explores the application of, and obstacles to, plain Japanese in the medical sector. Methods: A literature review was performed across the following databases: Web of Science, PubMed, Google Scholar, Scopus, CINAHL Plus, Springer Link, and Ichushi-Web (Japanese medical literature). The search covered themes related to healthcare, care for foreign patients, and scholarly articles, and was conducted in July 2023. Results: The study incorporated five papers. Each paper emphasized the language barriers foreign residents in Japan face when accessing healthcare, highlighting the critical role and necessity of plain Japanese in medical environments. Most of the reports focused on the challenges of delivering medical care to foreign patients and the training of healthcare professionals in using plain Japanese for communication. Conclusion: The knowledge and application of plain Japanese among healthcare professionals are inadequate, and literature also remains scarce. With the increasing number of foreign residents in Japan, the establishment of a healthcare system that effectively uses plain Japanese is essential. However, plain Japanese may not be the optimal linguistic assistance in certain situations, so it is imperative to encourage more research and reports on healthcare services using plain Japanese.
Keywords: plain Japanese; easy Japanese; plain language; foreign residents; healthcare access; language barriers; emigrants and immigrants
9. Literature classification and its applications in condensed matter physics and materials science by natural language processing
Authors: 吴思远, 朱天念, 涂思佳, 肖睿娟, 袁洁, 吴泉生, 李泓, 翁红明. 《Chinese Physics B》, SCIE, EI, CAS, CSCD, 2024, Issue 5, pp. 117-123 (7 pages)
The exponential growth of literature is constraining researchers' access to comprehensive information in related fields. While natural language processing (NLP) may offer an effective solution to literature classification, it remains hindered by the lack of labelled datasets. In this article, we introduce a novel method for generating literature classification models through semi-supervised learning, which can generate labelled datasets iteratively with limited human input. We apply this method to train NLP models for classifying literature related to several research directions, i.e., battery, superconductor, topological material, and artificial intelligence (AI) in materials science. The trained NLP 'battery' model, applied to a larger dataset different from the training and testing datasets, achieves an F1 score of 0.738, which indicates the accuracy and reliability of this scheme. Furthermore, our approach demonstrates that even with insufficient data, the not-yet-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
Keywords: natural language processing; text mining; materials science
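A hedged sketch of the self-training idea the abstract describes (iteratively promoting high-confidence predictions to labels), written with the built-in Classify; the seed examples, threshold, and round count are illustrative, and this is not the authors' pipeline:

    selfTrain[seed_List, unlabelled_List, rounds_Integer, threshold_] :=
     Module[{labelled = seed, pool = unlabelled, cl, confident},
      Do[
       cl = Classify[labelled];  (* labelled: rules of the form text -> class *)
       confident = Select[pool,
         Max[Values[cl[#, "Probabilities"]]] >= threshold &];
       labelled = Join[labelled, Thread[confident -> (cl /@ confident)]];
       pool = Complement[pool, confident],
       {rounds}];
      Classify[labelled]]

    (* usage, with hypothetical data:
       model = selfTrain[seedRules, untaggedAbstracts, 3, 0.9] *)

Each round grows the labelled set with pseudo-labels the current model is confident about, matching the paper's goal of generating a labelled dataset iteratively with limited human input.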
10. Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English
Authors: Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, Issue 9, pp. 2849-2868 (20 pages)
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
Keywords: hate speech detection; zero-shot; few-shot; fine-tuning; natural language processing
11. DeBERTa-GRU: Sentiment Analysis for Large Language Model
Authors: Adel Assiri, Abdu Gumaei, Faisal Mehmood, Touqeer Abbas, Sami Ullah. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 6, pp. 4219-4236 (18 pages)
Modern technological advancements have made social media an essential component of daily life. Social media allow individuals to share thoughts, emotions, and ideas. Sentiment analysis evaluates whether the sentiment of a text is positive, negative, neutral, or some other personal emotion, in order to understand the sentiment context of the text. Sentiment analysis is essential in business and society because it impacts strategic decision-making. Sentiment analysis involves challenges due to lexical variation, unlabeled datasets, and text-distance correlations. Execution time increases with the sequential processing of sequence models, whereas calculation times for Transformer models are reduced because of parallel processing. This study uses a hybrid deep learning strategy to combine the strengths of the Transformer and sequence models while avoiding their limitations. In particular, the proposed model integrates Decoding-enhanced Bidirectional Encoder Representations from Transformers (BERT) with disentangled attention (DeBERTa) and the Gated Recurrent Unit (GRU) for sentiment analysis. Using the Decoding-enhanced BERT technique, the words are mapped into a compact, semantic word-embedding space, and the Gated Recurrent Unit model can capture the distance-contextual semantics correctly. The proposed hybrid model achieves an F1-score of 97% on the Twitter Large Language Model (LLM) dataset, which is much higher than the performance of new techniques.
Keywords: DeBERTa; GRU; Naive Bayes; LSTM; sentiment analysis; large language model
12. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, Issue 6, pp. 2399-2450 (52 pages)
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
13. LKPNR: Large Language Models and Knowledge Graph for Personalized News Recommendation Framework
Authors: Hao Chen, Runfeng Xie, Xiangyang Cui, Zhou Yan, Xin Wang, Zhanwei Xuan, Kai Zhang. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 6, pp. 4283-4296 (14 pages)
Accurately recommending candidate news to users is a basic challenge for personalized news recommendation systems. Traditional methods usually find it difficult to learn and acquire the complex semantic information in news texts, resulting in unsatisfactory recommendation results. Besides, these traditional methods are more friendly to active users with rich historical behaviors; however, they cannot effectively solve the long-tail problem of inactive users. To address these issues, this research presents a novel general framework that combines Large Language Models (LLM) and Knowledge Graphs (KG) with traditional methods. To learn the contextual information of news text, we use LLMs' powerful text understanding ability to generate news representations with rich semantic information, and then the generated news representations are used to enhance the news encoding in traditional methods. In addition, multi-hop relationships of news entities are mined and the structural information of news is encoded using KG, thus alleviating the challenge of long-tail distribution. Experimental results demonstrate that, compared with various traditional models, the framework significantly improves recommendation performance on evaluation indicators such as AUC, MRR, nDCG@5, and nDCG@10. The successful integration of LLM and KG in our framework has established a feasible way to achieve more accurate personalized news recommendation. Our code is available at https://github.com/Xuan-ZW/LKPNR.
Keywords: large language models; news recommendation; knowledge graphs (KG)
14. Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models
Authors: Zheyi Chen, Liuchang Xu, Hongting Zheng, Luyao Chen, Amr Tolba, Liang Zhao, Keping Yu, Hailin Feng. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 8, pp. 1753-1808 (56 pages)
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLM) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLM) to Large Multimodal Models (LMM). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Keywords: artificial intelligence; large language models; large multimodal models; foundation models
15. Large language models in laparoscopic surgery: A transformative opportunity
Author: Partha Pratim Ray. 《Laparoscopic, Endoscopic and Robotic Surgery》, 2024, Issue 4, pp. 174-180 (7 pages)
This opinion paper explores the transformative potential of large language models (LLMs) in laparoscopic surgery and argues for their integration to enhance surgical education, decision support, reporting, and patient care. LLMs can revolutionize surgical education by providing personalized learning experiences and accelerating skill acquisition. Intelligent decision support systems powered by LLMs can assist surgeons in making complex decisions, optimizing surgical workflows, and improving patient outcomes. Moreover, LLMs can automate surgical reporting and generate personalized patient education materials, streamlining documentation and improving patient engagement. However, challenges such as data scarcity, surgical semantic capture, real-time inference, and integration with existing systems need to be addressed for successful LLM integration. The future of laparoscopic surgery lies in the seamless integration of LLMs, enabling autonomous robotic surgery, predictive surgical planning, intraoperative decision support, virtual surgical assistants, and continuous learning. By harnessing the power of LLMs, laparoscopic surgery can be transformed, empowering surgeons and ultimately benefiting patients.
Keywords: large language model; artificial intelligence; generative artificial intelligence; laparoscopy; surgery
16. Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. 《Journal of Software Engineering and Applications》, 2024, Issue 5, pp. 421-447 (27 pages)
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
Keywords: large language models; PII leakage; privacy; memorization; overfitting; membership inference attack (MIA)
17. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 3, pp. 4379-4397 (19 pages)
Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which is used to reduce the dimensionality and redundancy of features and select only the relevant ones; transfer learning, which is used to train and test the model on different datasets to analyze how much of the learning is passed to other datasets; and an ensemble method, which is utilized to explore the increase in performance upon combining multiple classifiers in a model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers are combined. It reveals that using an amalgam of techniques such as those used in this study (feature selection, transfer learning, and ensemble methods) proves helpful in optimizing software bug prediction models and provides a high-performing, useful end model.
Keywords: natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
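A minimal sketch of the ensemble-voting ingredient (majority vote over heterogeneous classifiers); the Method names are standard Classify options, but the feature/label data and the model mix are placeholders, not the paper's setup:

    trainEnsemble[trainingData_List] :=
     Table[Classify[trainingData, Method -> m],
      {m, {"RandomForest", "LogisticRegression", "NearestNeighbors"}}]

    ensemblePredict[models_List, item_] :=
     First @ Commonest @ Through[models[item]]  (* majority vote *)

    (* usage, with hypothetical data:
       models = trainEnsemble[Thread[featureVectors -> bugLabels]];
       ensemblePredict[models, newFeatureVector] *)

Letting several base learners vote this way is the mechanism by which an ensemble can beat any single classifier, which is what the reported AUC-ROC improvements reflect.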
18. Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 5, pp. 2481-2503 (23 pages)
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance in tackling domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module. This method significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: relational triple extraction; semantic interaction; large language models; data augmentation; specific domains
19. Revealing the Hidden Mathematical Beauties of the Cayley-Hamilton Method
Author: Haiduke Sarafian. 《American Journal of Computational Mathematics》, 2024, Issue 2, pp. 257-263 (7 pages)
The inversion of a non-singular square matrix applying a Computer Algebra System (CAS) is straightforward. The CASs make the numeric computation efficient but mask the mathematical characteristics. The algorithms conducive to the output are sealed and inaccessible. In practice, other than the CPU timing, the applied inversion method is irrelevant. This research-oriented article discusses one such process, the Cayley-Hamilton (C.H.) method [1]. Pursuing the process symbolically reveals its hidden mathematical characteristics, unpublished even in the original article [1]. This article expands the general vision of the original named method without altering its practical applications. We have used the famous CAS Mathematica [2]. We have briefed the theory behind the method and applied it to symbolic and numeric matrices of different sizes. The results are compared to the named CAS's sealed, packaged library commands. The codes are given, and the algorithms are unsealed.
Keywords: Cayley-Hamilton method; matrix inversion; linear algebra; computer algebra system; Mathematica
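The core identity the method rests on can be stated in a few lines: since a matrix satisfies its own characteristic polynomial, p(A) = c0 I + c1 A + ... + cn A^n = 0, the inverse follows as A^-1 = -(1/c0)(c1 I + c2 A + ... + cn A^(n-1)). A short sketch of that standard formula (not the article's own code):

    cayleyHamiltonInverse[a_?SquareMatrixQ] :=
     Module[{n = Length[a], c, x},
      c = CoefficientList[CharacteristicPolynomial[a, x], x];
      -(1/c[[1]]) Sum[c[[k + 1]] MatrixPower[a, k - 1], {k, 1, n}]]

    (* check against the built-in:
       cayleyHamiltonInverse[{{1, 2}, {3, 4}}] == Inverse[{{1, 2}, {3, 4}}]  -> True *)

Because everything runs through CoefficientList and MatrixPower, the same definition works symbolically as well as numerically, which is the kind of transparency the article advocates over sealed library commands.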
20. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model is taken up here to classify sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers (BERT); conversation; ensemble model; fine-tuning; Generalized Autoregressive Pretraining for Language Understanding (XLNet); Generative Pre-trained Transformer (GPT); hyperparameter tuning; natural language processing; Robustly Optimized BERT pretraining Approach (RoBERTa); sentence classification; transformer models