Journal Articles
2 articles found
1. Efficiency-Driven Custom Chatbot Development: Unleashing LangChain, RAG, and Performance-Optimized LLM Fusion
Authors: S. Vidivelli, Manikandan Ramachandran, A. Dharunbalaji. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2423-2442 (20 pages)
This work introduces a methodology for custom chatbot development that targets efficiency alongside effectiveness. It combines three key technologies: LangChain, Retrieval-Augmented Generation (RAG), and large language models (LLMs) fine-tuned with performance-efficient techniques such as LoRA and QLoRA. LangChain allows chatbots to be tailored precisely to specific purposes, ensuring focused and relevant interactions with users. RAG's web-scraping capabilities give these chatbots access to a vast store of information, enabling them to provide comprehensive and informative responses to queries. The retrieved information is then strategically woven into response generation using LLMs that have been fine-tuned with an emphasis on performance efficiency. This combined approach offers a triple benefit: improved effectiveness, an enhanced user experience, and expanded access to information. The chatbots become adept at handling user questions accurately and efficiently, while informative and contextually relevant responses make interactions more natural and engaging for users. Finally, web scraping enables the chatbots to address a broader variety of queries by granting them access to a wider knowledge base. By examining the details of performance-efficient LLM fine-tuning and emphasizing the critical role of web-scraped data, this study makes a significant contribution to advancing custom chatbot design and implementation. The resulting chatbots showcase the substantial potential of these technologies for creating informative, user-friendly, and efficient conversational agents, ultimately changing the way users interact with chatbots.
Keywords: LangChain, retrieval-augmented generation (RAG), fine-tuning
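The retrieve-then-generate pipeline the abstract describes can be illustrated with a minimal, library-free sketch: chunks are ranked against the query, and the top matches are stuffed into a prompt template for the LLM. The overlap scorer, prompt wording, and sample documents are illustrative assumptions; the paper's actual system uses LangChain with embedding-based retrieval and a fine-tuned model.

```python
# Minimal RAG sketch: rank document chunks by token overlap with the
# query, then assemble the best matches into a prompt for an LLM.
# A production system would use vector embeddings and a real model call.

def tokenize(text: str) -> set[str]:
    """Lowercase bag-of-words tokenization."""
    return set(text.lower().split())

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most tokens with the query."""
    q = tokenize(query)
    ranked = sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, chunks: list[str]) -> str:
    """Stuff retrieved context into a prompt template."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LoRA inserts low-rank adapter matrices into a frozen model.",
    "QLoRA combines LoRA adapters with 4-bit quantization.",
    "Web scraping collects documents to populate the knowledge base.",
]
prompt = build_prompt("How does QLoRA reduce memory use?", docs)
```

The grounding effect is visible even at this scale: the model only sees chunks relevant to the query, which is what keeps responses "focused and relevant" in the abstract's terms.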
2. A reliable knowledge processing framework for combustion science using foundation models
Authors: Vansh Sharma, Venkat Raman. Energy and AI (EI), 2024, No. 2, pp. 396-416 (21 pages)
This research explores the integration of large language models (LLMs) into scientific data assimilation, focusing on combustion science as a case study. Leveraging foundational models integrated with a Retrieval-Augmented Generation (RAG) framework, the study introduces an approach to process diverse combustion research data, spanning experimental studies, simulations, and literature. The multifaceted nature of combustion research emphasizes the critical role of knowledge processing in navigating and extracting valuable information from a vast and diverse pool of sources. The developed approach minimizes computational and economic expenses while optimizing data privacy and accuracy. It incorporates prompt engineering and offline open-source LLMs, offering user autonomy in selecting base models. The study provides a thorough examination of text segmentation strategies, conducts comparative studies between LLMs, and explores various optimized prompts to demonstrate the effectiveness of the framework. By incorporating an external vector database, the framework outperforms a conventional LLM in generating accurate responses and constructing robust arguments. Additionally, the study investigates optimized prompt templates for efficient extraction of scientific literature. Furthermore, a targeted scaling study quantifies the algorithmic performance of the framework as the number of prompt tokens increases. The research addresses concerns related to hallucinations and false research articles by introducing a custom workflow developed with a detection algorithm to filter out inaccuracies.
Despite identified areas for improvement, the framework consistently delivers accurate domain-specific responses with minimal human oversight. The prompt-agnostic approach introduced holds promise for future improvements. The study underscores the significance of integrating LLMs and knowledge processing techniques in scientific research, providing a foundation for advancements in data assimilation and utilization.
Keywords: large language models (LLM), foundation models, combustion, knowledge processing, retrieval-augmented generation (RAG)
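The abstract's detection workflow for hallucinated research articles can be sketched as a reference-verification filter: citations extracted from a model's answer are kept only if they match a trusted local index. The regex, the index contents, and the function names below are illustrative assumptions, not the authors' published code.

```python
# Hedged sketch of a hallucination filter for cited references: titles
# quoted in an LLM answer are split into verified vs. suspect lists by
# checking them against a trusted local index of known publications.
import re

TRUSTED_TITLES = {
    "a reliable knowledge processing framework for combustion science",
    "efficiency-driven custom chatbot development",
}

def extract_citations(answer: str) -> list[str]:
    """Pull double-quoted titles out of the model's answer."""
    return re.findall(r'"([^"]+)"', answer)

def filter_hallucinated(answer: str) -> tuple[list[str], list[str]]:
    """Return (verified, suspect) title lists for an answer."""
    verified, suspect = [], []
    for title in extract_citations(answer):
        (verified if title.lower() in TRUSTED_TITLES else suspect).append(title)
    return verified, suspect

answer = ('See "A Reliable Knowledge Processing Framework for Combustion Science" '
          'and "Imaginary Combustion Survey 2031".')
ok, bad = filter_hallucinated(answer)
```

In a real deployment the trusted index would be the same vector database that backs retrieval, so a citation is verified by similarity search rather than exact string match.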