Abstract: In recent years, early detection and warning of fires have posed a significant challenge to environmental protection and human safety. Deep learning models such as Faster R-CNN (Faster Region-based Convolutional Neural Network), YOLO (You Only Look Once), and their variants have demonstrated superiority in quickly detecting objects in images and videos, creating new opportunities for automatic and efficient fire detection. The YOLO model, especially newer versions such as YOLOv10, stands out for its fast processing capability, making it suitable for low-latency applications. However, when applied to real-world datasets, its fire-prediction accuracy is still not high. This study improves the accuracy of YOLOv10 for real-time applications through model fine-tuning and data augmentation. The core work involves creating a diverse fire-image dataset suited to fire detection in buildings and factories; freezing the initial layers of the model to retain previously learned general features; applying the Squeeze-and-Excitation attention mechanism; and employing Stochastic Gradient Descent (SGD) with momentum to enhance accuracy while preserving real-time fire detection. Experimental results demonstrate the effectiveness of the proposed approach, where the YOLOv10 small model exhibits the best balance among the YOLO family variants (nano, medium, and balanced). Additionally, the study provides an experimental evaluation highlighting the effectiveness of model fine-tuning against the YOLOv10 baseline, YOLOv8, and Faster R-CNN on two criteria: accuracy and prediction time.
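The fine-tuning recipe above (freezing early layers, a Squeeze-and-Excitation block, and SGD with momentum) can be sketched in PyTorch. This is an illustrative sketch, not the authors' code: the toy backbone, channel sizes, and the momentum value are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze (global average pool), then excite
    (two fully connected layers producing per-channel weights)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, max(channels // reduction, 1)),
            nn.ReLU(inplace=True),
            nn.Linear(max(channels // reduction, 1), channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-weight feature channels

# Toy backbone standing in for the early layers of a detector.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    SqueezeExcitation(16),
    nn.Conv2d(16, 32, 3, padding=1),
)

# Freeze the first conv block so its pretrained features are retained.
for p in backbone[0].parameters():
    p.requires_grad = False

# SGD with momentum over the remaining trainable parameters.
trainable = [p for p in backbone.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
```

In a real YOLOv10 setup the same idea applies: mark the backbone's early parameter groups as non-trainable and pass only the rest to the optimizer.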
Abstract: Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic cues than tasks such as dependency parsing, which rely more on syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving sentence context, tracking conversational progress, or comparing impacts. Here, an ensemble of pre-trained language models is used to classify sentences from a conversation corpus into four categories: information, question, directive, and commission. These label sequences are used to analyze conversation progress and predict the pecking order of the conversation. An ensemble of BERT (Bidirectional Encoder Representations from Transformers), RoBERTa (Robustly Optimized BERT Pretraining Approach), GPT (Generative Pre-trained Transformer), DistilBERT, and XLNet (Generalized Autoregressive Pretraining for Language Understanding) is trained on the conversation corpus, with hyperparameter tuning applied for better classification performance. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset and outperforms the base BERT, GPT, DistilBERT, and XLNet models, achieving an F1-score of 0.88 with the fine-tuned parameters.
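One common way to combine several classifiers like the models above is soft voting: average each model's class probabilities and take the argmax. The abstract does not specify the combination rule, so this is a hedged sketch with hypothetical probabilities, not the EPLM-HT implementation.

```python
import numpy as np

# The four sentence categories used in the study.
LABELS = ["information", "question", "directive", "commission"]

def soft_vote(prob_sets):
    """Average per-model class probabilities and pick the top label.

    prob_sets: list of (n_sentences, n_labels) arrays, one per model
    (e.g. one each for BERT, RoBERTa, GPT, DistilBERT, XLNet)."""
    mean = np.mean(np.stack(prob_sets), axis=0)
    return [LABELS[i] for i in mean.argmax(axis=1)]

# Hypothetical probabilities from three models for two sentences.
p1 = np.array([[0.7, 0.1, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1]])
p2 = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]])
p3 = np.array([[0.5, 0.2, 0.2, 0.1], [0.3, 0.5, 0.1, 0.1]])
print(soft_vote([p1, p2, p3]))  # ['information', 'question']
```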
Funding: This work is part of the research projects LaTe4PoliticES (PID2022-138099OBI00), funded by MICIU/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF), A Way of Making Europe, and LT-SWM (TED2021-131167B-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. Mr. Ronghao Pan is supported by the Programa Investigo grant, funded by the Region of Murcia, the Spanish Ministry of Labour and Social Economy, and the European Union-NextGenerationEU under the "Plan de Recuperación, Transformación y Resiliencia (PRTR)."
Abstract: Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: receiving instructions in natural language, or task demonstrations, and generating the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches combined with information retrieval. Furthermore, the Zephyr model achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, the evaluated models are confirmed to perform well in hate-text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, while the fine-tuned approach tends to produce many false positives.
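The zero-shot and few-shot settings above differ only in whether retrieved demonstrations are prepended to the prompt. A minimal sketch of that prompt construction; the instruction wording, labels, and example sentences are illustrative assumptions, not the study's actual prompts.

```python
def build_prompt(text: str, examples=()) -> str:
    """Compose a zero- or few-shot classification prompt for an LLM.

    examples: (sentence, label) pairs, e.g. retrieved from training data;
    an empty tuple yields the zero-shot variant."""
    lines = ["Classify the text as 'sexist' or 'not sexist'.", ""]
    for sent, label in examples:   # few-shot demonstrations, if any
        lines += [f"Text: {sent}", f"Label: {label}", ""]
    lines += [f"Text: {text}", "Label:"]  # the test instance to complete
    return "\n".join(lines)

# Zero-shot: instruction plus the test instance only.
print(build_prompt("some input sentence"))

# Few-shot: one retrieved demonstration precedes the test instance.
print(build_prompt("some input sentence",
                   examples=[("a demo sentence", "not sexist")]))
```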
Funding: Financial support from the SERB-SURE scheme under file number SUR/2022/003129 is acknowledged. Jong Hyeok Park acknowledges the support of the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (RS-2023-00302697, RS-2023-00268523).
Abstract: Mo₂C is an excellent electrocatalyst for the hydrogen evolution reaction (HER) but a poor one for the oxygen evolution reaction (OER). Herein, two different elements, Co and Fe, are incorporated into Mo₂C, yielding a finely tuned electronic structure that is not achievable by incorporating either metal alone. Consequently, the resulting electrocatalyst Co₀.₈Fe₀.₂-Mo₂C-80 displays excellent OER catalytic performance, evidenced by a low overpotential of 214.0 (and 246.5) mV to attain a current density of 10 (and 50) mA cm⁻², an ultralow Tafel slope of 38.4 mV dec⁻¹, and long-term stability in alkaline medium. Theoretical data demonstrate that Co₀.₈Fe₀.₂-Mo₂C-80 requires the lowest overpotential (1.00 V) for the OER and that the Co centers are the active sites. The ultrahigh catalytic performance is attributed to excellent intrinsic catalytic activity arising from a high Brunauer-Emmett-Teller specific surface area, a large electrochemically active surface area, a small Tafel slope, and low charge-transfer resistance.
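A Tafel slope like the 38.4 mV dec⁻¹ quoted above is obtained by fitting overpotential against the logarithm of current density (η = a + b·log₁₀ j). A minimal sketch of that least-squares fit, on synthetic data constructed to have exactly that slope (the values are illustrative, not the paper's measurements):

```python
import math

def tafel_slope(current_densities, overpotentials):
    """Least-squares fit of overpotential (V) vs log10(j);
    returns the Tafel slope b in mV per decade."""
    xs = [math.log10(j) for j in current_densities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(overpotentials) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, overpotentials)) \
        / sum((x - mx) ** 2 for x in xs)
    return slope * 1000.0  # V/dec -> mV/dec

# Synthetic polarization data with a built-in 38.4 mV/dec slope.
j = [1.0, 10.0, 100.0]            # current densities, mA cm^-2
eta = [0.176, 0.176 + 0.0384, 0.176 + 2 * 0.0384]  # overpotentials, V
print(round(tafel_slope(j, eta), 1))  # 38.4
```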
Abstract: As the realm of enterprise-level conversational AI continues to evolve, it becomes evident that while generalized Large Language Models (LLMs) like GPT-3.5 bring remarkable capabilities, they also bring forth formidable challenges. These models, honed on vast and diverse datasets, have undoubtedly pushed the boundaries of natural language understanding and generation. However, they often stumble when faced with the intricate demands of nuanced enterprise applications. This research advocates for a strategic paradigm shift, urging enterprises to embrace a fine-tuning approach as a means to optimize conversational AI. While generalized LLMs are linguistic marvels, their inability to cater to the specific needs of businesses across various industries poses a critical challenge. This strategic shift involves empowering enterprises to seamlessly integrate their own datasets into LLMs, a process that extends beyond linguistic enhancement. The core concept of this approach centers on customization, enabling businesses to fine-tune the AI's functionality to fit precisely within their unique business landscapes. By immersing the LLM in industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the AI transcends its generic capabilities to become a sophisticated conversational partner aligned with the intricacies of the enterprise's domain. The transformative potential of this fine-tuning approach cannot be overstated. It enables a transition from a universal AI solution to a highly customizable tool. The AI evolves from being a linguistic powerhouse to a contextually aware, industry-savvy assistant. As a result, it not only responds with linguistic accuracy but also with depth, relevance, and resonance, significantly elevating user experiences and operational efficiency.
In the subsequent sections, this paper delves into the intricacies of fine-tuning, exploring the multifaceted challenges and abundant opportunities it presents. It addresses the technical intricacies of data integration, ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI. The journey embarked upon in this research holds the potential to redefine the role of conversational AI in enterprises, ushering in an era where AI becomes a dynamic, deeply relevant, and highly effective tool, empowering businesses to excel in an ever-evolving digital landscape.
Funding: This work was supported by the National Key Research and Development Program of China (2017YFC1600403), the National Science Fund for Excellent Young Scholars (21822806), the National Natural Science Foundation of China (31670095, 31770097), the Fundamental Research Funds for the Central Universities (JUSRP51701A), and the National First-class Discipline Program of Light Industry Technology and Engineering (LITE2018-08).
Abstract: Numerous microorganisms in nature are capable of synthesizing diverse useful compounds; however, these natural microorganisms are generally inefficient at producing target products on an industrial scale relative to either chemical synthesis or extraction methods. To achieve industrial production of useful compounds, these natural microorganisms must undergo a certain degree of mutation or effective fine-tuning strategies. This review describes how to achieve an ideal metabolically fine-tuned process, covering both static and dynamic control strategies. The static control strategies mainly focus on various metabolic engineering approaches, including protein engineering, upregulation/downregulation, and combinatorial control of these approaches, to enhance the flexibility of their application in fine-tuned metabolic networks. We then focus on dynamic control strategies for fine-tuned metabolic networks. The design principles derived would guide the construction of microbial cell factories for various useful compounds.
Abstract: To improve efficiency, reduce training costs, and promote the use of computers to replace the pilot seat in air traffic control simulators, an ensemble learning strategy is adopted to generate pilot readback instructions. Five large-scale pre-trained language models are fine-tuned, and K-fold cross-validation is used to select the four best-performing models as base models for constructing the ensemble. The resulting ensemble model achieves the best results in this domain on a control-instruction dataset. Under the general ROUGE (recall-oriented understudy for gisting evaluation) metrics, it reaches state-of-the-art scores of ROUGE-1 = 0.998, ROUGE-2 = 0.995, and ROUGE-L = 0.998, where ROUGE-1 measures the overlap of single words between the reference and generated text, ROUGE-2 the overlap of consecutive word pairs, and ROUGE-L the overlap of the longest common subsequence. To overcome the limitations of these general metrics in this domain and evaluate model performance more accurately, a keyword-based evaluation standard is proposed for the generated readback instructions: it computes per-keyword metrics on the tokenized control text. Under this keyword-based standard, the constructed model achieves a best overall accuracy of 0.987, with a readback accuracy of 0.998 for aircraft call signs.
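ROUGE-1 as described above is unigram-overlap recall between the reference readback and the generated one (with clipped counts). A minimal sketch, using a hypothetical ATC readback as input rather than anything from the study's dataset:

```python
from collections import Counter

def rouge_1(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: clipped unigram overlap between the reference
    readback and the generated readback, divided by reference length."""
    ref = Counter(reference.split())
    cand = Counter(candidate.split())
    overlap = sum(min(cnt, cand[tok]) for tok, cnt in ref.items())
    return overlap / max(sum(ref.values()), 1)

# Hypothetical readback pair (call sign and instruction are invented).
ref = "climb flight level three five zero CCA1234"
hyp = "climb flight level three five zero CCA1234"
print(rouge_1(ref, hyp))  # 1.0

# A generated readback that drops the call sign loses recall.
print(rouge_1(ref, "climb flight level three five zero"))
```

The keyword-based standard proposed in the study works similarly in spirit, but scores specific tokenized keywords (such as the call sign) rather than all unigrams.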