Abstract: As enterprise-level conversational AI evolves, it has become evident that while generalized Large Language Models (LLMs) such as GPT-3.5 offer remarkable capabilities, they also present formidable challenges. Trained on vast and diverse datasets, these models have pushed the boundaries of natural language understanding and generation, yet they often falter under the nuanced demands of enterprise applications. This research advocates a strategic shift: enterprises should adopt fine-tuning to optimize conversational AI. Generalized LLMs, for all their linguistic strength, cannot address the specific needs of businesses across industries. The proposed approach lets enterprises integrate their own datasets into LLMs, a process that goes beyond linguistic enhancement and centers on customization, tuning the AI's behavior to each organization's business landscape. By exposing the LLM to industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the model moves beyond generic competence to become a sophisticated conversational partner aligned with the enterprise's domain. Fine-tuning thus turns a universal AI solution into a highly customizable tool: the model evolves from a linguistic powerhouse into a contextually aware, industry-savvy assistant that responds not only with linguistic accuracy but with depth, relevance, and resonance, improving user experience and operational efficiency. Subsequent sections examine the technical intricacies of data integration, the ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI, with the aim of redefining the role of conversational AI in enterprises.
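The abstract gives no implementation details; as a rough illustration of the kind of domain fine-tuning it describes, the following sketch continues pre-training a causal language model on an enterprise text corpus with the Hugging Face `transformers` Trainer. The model name (`gpt2` as an open stand-in), the corpus path `corpus.txt`, and all hyperparameters are placeholder assumptions, not the paper's setup.

```python
# Minimal sketch of domain fine-tuning a causal LLM on enterprise text.
# Assumptions: GPT-2 as a stand-in open model, one document per line in
# corpus.txt, default-ish hyperparameters.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # placeholder; the paper targets GPT-3.5-class models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Enterprise corpus: industry documents, reports, guidelines, one per line.
raw = load_dataset("text", data_files={"train": "corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-enterprise",
                           per_device_train_batch_size=2,
                           num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```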
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2020AAA0106700 and the National Natural Science Foundation of China under Grant No. 62022027.
Abstract: Fine-tuning pre-trained language models such as BERT has become an effective practice in natural language processing (NLP) and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-training tasks, and leveraging external data and knowledge; the fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The self-ensemble mechanism integrates checkpoints from an experience pool to form the teacher model. To transfer knowledge from the teacher model to the student model efficiently, we further use knowledge distillation, called self-distillation because the distillation signal comes from the model itself across the time dimension. Experiments on the GLUE benchmark and a text classification benchmark show that the proposed approach significantly improves the adaptation of BERT without any external data or knowledge. We conduct extensive experiments to investigate the effectiveness of the self-ensemble and self-distillation mechanisms, and the proposed approach achieves a new state-of-the-art result on the SNLI dataset.
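The abstract does not pin down the mechanics; a minimal sketch of one common reading is shown below, assuming the teacher is a parameter average of the K most recent checkpoints in the experience pool and the self-distillation term is an MSE between student and teacher logits on a classification model. K, the loss weight `lam`, and the MSE form are assumptions.

```python
# Sketch: self-ensemble teacher (parameter average of pooled checkpoints)
# plus self-distillation (MSE between student and teacher logits).
# Assumes a Hugging Face-style classifier whose forward returns .logits.
import copy
from collections import deque
import torch
import torch.nn.functional as F

def make_teacher(model, pool):
    """Build the teacher by averaging the parameters of pooled checkpoints."""
    teacher = copy.deepcopy(model)
    avg = {k: (sum(sd[k].float() for sd in pool) / len(pool)).to(pool[0][k].dtype)
           for k in pool[0]}
    teacher.load_state_dict(avg)
    teacher.eval()
    return teacher

def train_step(model, teacher, inputs, labels, optimizer, lam=1.0):
    """One step: task cross-entropy plus distillation against the teacher."""
    logits = model(**inputs).logits
    loss = F.cross_entropy(logits, labels)
    if teacher is not None:
        with torch.no_grad():
            t_logits = teacher(**inputs).logits
        loss = loss + lam * F.mse_loss(logits, t_logits)  # self-distillation
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Experience pool: keep the K most recent checkpoints; rebuild the teacher
# from them periodically, e.g. once per epoch:
#   pool.append(copy.deepcopy(model.state_dict()))
#   teacher = make_teacher(model, list(pool))
K = 3
pool = deque(maxlen=K)
```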
Abstract: In the Internet era, more and more financial companies publish their views on financial news platforms. As carriers of public opinion, these comment texts reflect the sentiment of financial companies and influence the public's investment decisions and market trends. Sentiment analysis provides an effective means of determining the sentiment of massive volumes of economic text. However, the domain-specific nature of such text and the lack of suitable large labeled datasets pose major challenges for conventional sentiment analysis models: when general-purpose models are applied to specific domains such as economics, their accuracy and recall degrade. To overcome these challenges, this paper targets sentiment analysis of economic texts on financial news platforms and, starting from the word representation model, proposes a two-way BERT sentiment analysis model based on knowledge distillation (Two-way BERT based on knowledge distillation method). Comparative experiments against text convolutional neural networks (Text-CNN), convolutional recurrent neural networks (CRNN), and bidirectional long short-term memory networks (Bi-LSTM) show that the proposed method improves accuracy, recall, and F1 score by 1%-3% over the other algorithms and generalizes well.
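The abstract does not define the distillation objective of the two-way BERT model; a common formulation, shown here purely as an assumption, combines hard-label cross-entropy with a temperature-scaled KL term against the teacher's soft labels.

```python
# Sketch of a standard knowledge-distillation loss (an assumption; the
# paper's exact two-way BERT objective is not given in the abstract).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T=2.0, alpha=0.5):
    """alpha weights the soft-target KL term vs. hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                       # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```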
Abstract: To address the difficulty of acquiring corpora in the civil aviation air-ground communication domain, the uneven distribution of entities, and the insufficient entity standardization and accuracy of intent information extraction, this paper proposes an ontology-integrated intent mining method for air-ground communication based on bidirectional encoder representations from transformers (BERT) and a generative adversarial network (GAN), and introduces flight-pool information to verify and correct part of the extracted information, producing structured information that an air traffic control (ATC) system can interpret. First, an improved GAN model generates air-ground communication text, which effectively augments the data, balances the distribution of entity types, and expands the dataset. Next, intents are classified and annotated according to the ontology rules defined by the Single European Sky air traffic management project. Then, a pre-trained BERT model generates character vectors and resolves polysemy; a bidirectional long short-term memory (BiLSTM) network encodes the context in both directions to extract cross-sentence semantic features, which are fed into a conditional random field (CRF) model that learns and constrains label dependencies to obtain the globally optimal result. Finally, an edit distance (ED) algorithm checks the plausibility of the extracted intent information and corrects it. Comparative experiments show that the proposed method achieves a macro-averaged F1 score of 98.75% and outperforms other mainstream models for intent mining on the civil aviation air-ground communication dataset, laying a foundation for its incorporation into the digitalization process.
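A minimal sketch of the BERT + BiLSTM + CRF tagging architecture described above might look as follows. It relies on the Hugging Face `transformers` library and the third-party `pytorch-crf` package; the model name, hidden size, and tag set are placeholders, not the paper's configuration.

```python
# Sketch of a BERT -> BiLSTM -> CRF sequence tagger (configuration assumed).
import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class BertBiLstmCrf(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese",
                 lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emission = nn.Linear(2 * lstm_hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        # Character-level contextual embeddings from BERT.
        h = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(h)                  # bidirectional context encoding
        emissions = self.emission(h)         # per-token tag scores
        mask = attention_mask.bool()
        if tags is not None:                 # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)  # inference: Viterbi path
```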
Abstract: Source-code vulnerability detection commonly uses techniques such as code metrics, machine learning, and deep learning, but these techniques either fail to preserve the syntactic and semantic information in the source code or require extensive expert knowledge to define vulnerability features. To address these problems, this paper proposes a source-code vulnerability detection model based on BERT (bidirectional encoder representations from transformers). The model splits the source code under analysis into multiple small samples, converts each sample into a form approximating natural language, uses BERT to automatically extract vulnerability features from the source code, and then trains a well-performing vulnerability classifier that detects multiple types of vulnerabilities in Python code. Across vulnerability types, the model achieves an average accuracy of 99.2%, precision of 97.2%, recall of 96.2%, and F1 score of 96.7%, a performance improvement of 2%-14% over existing vulnerability detection methods. The experimental results show that the model is a general, lightweight, and extensible approach to vulnerability detection.
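A rough sketch of the core idea, fine-tuning BERT as a classifier over code fragments rendered as text, is shown below. The model name, the two example snippets, and the binary label scheme are illustrative assumptions; the paper's sample-splitting and natural-language conversion steps are not reproduced.

```python
# Sketch: BERT sequence classification over code snippets treated as text.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # 0 = safe, 1 = vulnerable (assumed)

snippets = ["cursor.execute('SELECT * FROM t WHERE id=' + user_input)",
            "cursor.execute('SELECT * FROM t WHERE id=%s', (user_input,))"]
labels = torch.tensor([1, 0])            # toy labels: SQL injection vs. safe

batch = tok(snippets, padding=True, truncation=True, max_length=256,
            return_tensors="pt")
out = model(**batch, labels=labels)      # loss + logits for one train step
out.loss.backward()                      # plug into any optimizer loop
```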
Abstract: Hypertension has become a major threat to global public health. In contrast to traditional invasive and cuff-based blood pressure measurement, and with the goal of monitoring blood pressure in real time to support early diagnosis, this paper focuses on the intrinsic relationship between pulse-wave morphology and blood pressure and proposes a blood pressure prediction method based on an improved BERT (bidirectional encoder representations from transformers) model applied to pulse waves. The method first applies a Butterworth filter to the raw pulse-wave signal for preprocessing and segments the signal by period, then combines deep learning techniques with the improved BERT model to extract and analyze features from the segmented pulse-wave cycles. Experiments on data from the MIMIC-III database verify the effectiveness and accuracy of the method: the predicted blood pressure values fully satisfy the Grade A standard of the British Hypertension Society. By examining the relationship between pulse waves and blood pressure in depth, the improved BERT model provides a new technical means for predicting and diagnosing hypertension.
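The preprocessing stage, Butterworth filtering followed by periodic segmentation, can be sketched with SciPy as below. The band-pass cutoffs, sampling rate, and peak-based segmentation rule are assumptions for illustration, not the paper's parameters.

```python
# Sketch: Butterworth band-pass filtering of a raw pulse-wave signal and
# peak-based segmentation into single cycles (parameters assumed).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess(signal, fs=125.0, low=0.5, high=8.0, order=4):
    """Zero-phase band-pass filter, then split at systolic peaks."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    clean = filtfilt(b, a, signal)                        # zero-phase filtering
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs))  # peaks >= 0.4 s apart
    # One pulse cycle per consecutive pair of peaks; each cycle then becomes
    # one input unit for the downstream BERT-style feature extractor.
    return [clean[s:e] for s, e in zip(peaks[:-1], peaks[1:])]

cycles = preprocess(np.random.randn(10 * 125))  # e.g. 10 s of signal at 125 Hz
```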