Abstract: [Purpose/Significance] Against the backdrop of rapid development and profound transformation in artificial-intelligence technology and its applications, new research topics and methods keep emerging in machine learning, and deep learning and reinforcement learning continue to advance. It is therefore necessary to explore how machine-learning research topics evolve across different fields and to identify hot and emerging topics. [Method/Process] Taking machine-learning research papers in the library and information science (LIS) field indexed in the Web of Science database from 2011 to 2022 as an example, this paper combines LDA and Word2vec for topic modeling and topic-evolution analysis, and introduces topic-intensity, topic-influence, topic-attention, and topic-novelty indicators to identify hot topics and emerging hot topics. [Results/Conclusions] The results show that: (1) combining the semantic-processing capability of Word2vec with the topic-evolution capability of LDA identifies research topics more accurately and visualizes their stage-by-stage evolution; (2) machine-learning research topics in LIS fall into three broad categories: natural language processing and text analysis, data mining and analysis, and information and knowledge services; the categories are strongly interrelated and exhibit correlated topic evolution; (3) the designed topic-intensity, topic-influence, and topic-attention indicators, together with a composite indicator, effectively identify the hot topics of the three periods 2011–2014, 2015–2018, and 2019–2022.
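The core fusion step described above, linking LDA topics across time slices by the semantic similarity of their top words, can be sketched as follows. This is a minimal illustration with toy 3-dimensional vectors standing in for trained Word2vec embeddings; the word lists and values are hypothetical, not taken from the paper's corpus.

```python
import numpy as np

def topic_vector(top_words, embeddings):
    """Average the word vectors of a topic's top words (Word2vec-style)."""
    vecs = [embeddings[w] for w in top_words if w in embeddings]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two topic vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" standing in for trained Word2vec vectors.
emb = {
    "text":      np.array([0.9, 0.1, 0.0]),
    "mining":    np.array([0.8, 0.2, 0.1]),
    "retrieval": np.array([0.1, 0.9, 0.2]),
    "service":   np.array([0.0, 0.8, 0.3]),
}

t_early = topic_vector(["text", "mining"], emb)        # topic from one time slice
t_late  = topic_vector(["retrieval", "service"], emb)  # topic from a later slice

# A high similarity would link the two topics into one evolution chain.
print(round(cosine(t_early, t_late), 3))
```

In a real pipeline the threshold above which two topics are chained would itself be a tuning parameter of the evolution analysis.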
Abstract: Safety is the core concern of the civil-aviation industry. To address the heavy reliance on expert experience and the low efficiency of current analysis of unplanned civil-aviation events, this paper proposes an analysis method combining Word2vec with a bidirectional long short-term memory (BiLSTM) neural network. First, a Word2vec model is trained on the event-text corpus to obtain word vectors and reduce the dimensionality of the vector space; then a BiLSTM model automatically extracts features, capturing the complete sequence information and contextual feature vectors of the event text; finally, a softmax function classifies the unplanned events. Experimental results show that the proposed method classifies better, achieving higher accuracy and F1 scores, and remains stable on imbalanced samples, demonstrating its applicability and effectiveness for unplanned civil-aviation event analysis.
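The final classification stage can be sketched in a few lines. To stay self-contained, this sketch replaces the BiLSTM encoder with simple mean-pooling over toy word vectors; the vocabulary, dimensions, class weights, and the two event classes (mechanical vs. operational) are all hypothetical stand-ins, not the paper's trained model.

```python
import numpy as np

def softmax(z):
    """Softmax over class scores, as used for the final event classification."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy 4-d word vectors standing in for trained Word2vec output.
emb = {
    "engine":  np.array([1.0, 0.0, 0.2, 0.1]),
    "failure": np.array([0.9, 0.1, 0.0, 0.0]),
    "delay":   np.array([0.0, 1.0, 0.1, 0.2]),
}

def encode(tokens):
    # Stand-in for the BiLSTM encoder: mean-pool the token vectors.
    return np.mean([emb[t] for t in tokens], axis=0)

# Hypothetical weights for 2 event classes (mechanical vs. operational).
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

probs = softmax(W @ encode(["engine", "failure"]))
print(probs.argmax())  # index of the predicted event class
```

The real model would replace `encode` with a BiLSTM that reads the token sequence in both directions before the softmax layer.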
Abstract: Topic classification of ancient Chinese texts, which relies mainly on catalog classification and rule matching, suffers from low efficiency, heavy dependence on expert knowledge, a single classification basis, and difficulty in automatic topic classification. To address this, this paper combines the content and character features of ancient texts, deriving topics from text content that meet researchers' needs and promoting a paradigm shift in digital-humanities research. First, following the character-analysis approach of the Eastern Han classic Shuowen Jiezi (《说文解字》) and building on a previously annotated ancient-text corpus, a new four-dimensional feature dataset of "pronunciation (说) – original text (文) – structure (解) – glyph (字)" is constructed. Second, a four-dimensional feature-vector extraction model (speaking, word, pattern, and font to vector, SWPF2vec) is designed and combined with a pretrained model for fine-grained feature representation of ancient texts. Third, a topic-classification model fusing convolutional neural networks, recurrent neural networks, and multi-head attention (dianji-recurrent convolutional neural networks for text classification, DJ-TextRCNN) is built. Finally, the four-dimensional semantic features are fused to enable multi-dimensional, deep, fine-grained semantic mining. On the topic-classification task, DJ-TextRCNN achieves the best accuracy under every feature dimension, reaching 76.23% with the four Shuowen Jiezi features, a first step toward accurate topic classification of ancient texts.
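The fusion of the four feature views can be illustrated by simple concatenation, the most common way to combine per-view vectors before a classifier. The 2-d vectors below are hypothetical placeholders; real ones would come from the SWPF2vec extractor and a pretrained model, and the fusion inside DJ-TextRCNN may be more elaborate than plain concatenation.

```python
import numpy as np

# Hypothetical per-character feature vectors for the four dimensions:
# pronunciation (说), original text (文), structure (解), glyph (字).
features = {
    "pronunciation": np.array([0.2, 0.1]),
    "text":          np.array([0.9, 0.3]),
    "structure":     np.array([0.4, 0.5]),
    "glyph":         np.array([0.1, 0.8]),
}

# Fuse the four views by concatenation into one 8-d representation,
# which a downstream classifier (here, DJ-TextRCNN) would consume.
order = ("pronunciation", "text", "structure", "glyph")
fused = np.concatenate([features[k] for k in order])
print(fused.shape)  # (8,)
```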
Abstract: General artificial intelligence (AGI) is developing rapidly, and large language models (LLMs) keep emerging. Their widespread application has greatly improved people's work quality and efficiency, but LLMs are not perfect: they come with shortcomings such as sensitive-data security, hallucination, and outdated knowledge. Moreover, general-purpose LLMs often answer questions in specialized fields inaccurately, which calls for retrieval-augmented generation (RAG). In smart healthcare in particular, the lack of relevant data prevents LLMs from fully exercising their excellent conversational and problem-solving abilities. The proposed algorithm uses Jieba word segmentation and a Word2Vec model to embed the text data, computes vector similarity between sentences and reranks them, helping the LLM quickly select the most reliable and trustworthy medical knowledge outside the model; with suitably written prompts, the LLM can then provide satisfactory answers to doctors' or patients' questions.
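The retrieval-and-rerank step described above can be sketched as follows: embed sentences by averaging word vectors, score candidates against the query by cosine similarity, and keep the top matches for the prompt. The tiny embedding table and English tokens are stand-ins for a trained Word2Vec model over Jieba-segmented medical text.

```python
import numpy as np

def sent_vec(tokens, emb):
    """Mean of word vectors: a simple Word2Vec-style sentence embedding."""
    return np.mean([emb[t] for t in tokens if t in emb], axis=0)

def rerank(query, candidates, emb, k=2):
    """Order candidate passages by cosine similarity to the query, keep top k."""
    q = sent_vec(query, emb)
    def cos(v):
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
    scored = sorted(candidates, key=lambda c: cos(sent_vec(c, emb)), reverse=True)
    return scored[:k]

# Toy embeddings standing in for a trained Word2Vec model; real tokens
# would come from Jieba segmentation of the external medical corpus.
emb = {"fever": np.array([1.0, 0.0]),
       "cough": np.array([0.8, 0.3]),
       "tax":   np.array([0.0, 1.0])}

docs = [["tax"], ["fever", "cough"]]
top = rerank(["fever"], docs, emb, k=1)
print(top)  # the medically relevant passage ranks first
```

The selected passages would then be interpolated into the prompt that is sent to the LLM alongside the user's question.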
Funding: Supported by the Sichuan Science and Technology Program (2021YFQ0003).
Abstract: With the development of Internet technology, the explosive growth of online information has made it difficult to filter out effective information, and finding a high-accuracy text-classification model has become a critical problem for text filtering, especially for Chinese texts. This paper uses manually calibrated comment data from the Douban movie website. First, a text-filtering model based on a BP neural network is built. Second, based on the Term Frequency–Inverse Document Frequency (TF-IDF) vector space model and the doc2vec method, a text word-frequency vector and a text semantic vector are obtained, and the word-frequency vector is linearly reduced with Principal Component Analysis (PCA). Third, the reduced word-frequency vector, the semantic vector, and a text value degree are combined to construct a text synthesis vector. Experiments show that the model combining the reduced word-frequency vector, the semantic vector, and the text value degree reaches the highest accuracy, 84.67%.
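The vector-construction steps above can be sketched end to end: TF-IDF weighting, PCA reduction via SVD, then concatenation with the semantic vector and the value degree. The term-frequency matrix, the doc2vec-style semantic vectors, and the value scores are all toy placeholders, not the paper's Douban data.

```python
import numpy as np

# Toy term-frequency matrix: 3 documents x 4 terms.
tf = np.array([[2., 0., 1., 0.],
               [0., 3., 0., 1.],
               [1., 1., 1., 1.]])

# TF-IDF weighting (idf = log(N / df)).
df = (tf > 0).sum(axis=0)
tfidf = tf * np.log(tf.shape[0] / df)

# PCA via SVD on the centered matrix: keep the top 2 components.
centered = tfidf - tfidf.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:2].T  # word-frequency vectors after reduction

# Hypothetical doc2vec-style semantic vectors for the same 3 documents.
semantic = np.array([[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]])

# Hypothetical scalar "text value degree" per document.
value = np.array([[0.7], [0.3], [0.5]])

# The text synthesis vector concatenates all three parts per document.
synthesis = np.hstack([reduced, semantic, value])
print(synthesis.shape)  # (3, 5)
```

Each row of `synthesis` would then be fed to the BP neural network for classification.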