Journal Literature
644 articles found in total
Enhanced Image Captioning Using Features Concatenation and Efficient Pre-Trained Word Embedding
1
Authors: Samar Elbedwehy, T. Medhat, Taher Hamza, Mohammed F. Alrahmawy. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, Issue 9, pp. 3637-3652 (16 pages)
One of the issues in Computer Vision is the automatic development of descriptions for images, sometimes known as image captioning. Deep Learning techniques have made significant progress in this area. The typical architecture of image captioning systems consists mainly of an image feature extractor subsystem followed by a caption generation lingual subsystem. This paper aims to find optimized models for these two subsystems. For the image feature extraction subsystem, the research tested eight different concatenations of pairs of vision models to find the most expressive extracted feature vector of the image. For the caption generation lingual subsystem, this paper tested three different pre-trained language embedding models: GloVe (Global Vectors for Word Representation), BERT (Bidirectional Encoder Representations from Transformers), and TaCL (Token-aware Contrastive Learning), to select the most accurate pre-trained language embedding model. Our experiments showed that an image captioning system using a concatenation of the two Transformer-based models SWIN (Shifted Window) and PVT (Pyramid Vision Transformer) as the image feature extractor, combined with the TaCL language embedding model, gives the best result among all tested combinations.
Keywords: image captioning; word embedding; concatenation; Transformer
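The core fusion step this abstract describes, concatenating the feature vectors of two different vision backbones before they reach the caption decoder, can be sketched in a few lines of PyTorch. This is a minimal illustration only: `SimpleEncoder` is a hypothetical stand-in for the pretrained SWIN and PVT extractors, and the output dimensions are invented.

```python
import torch
import torch.nn as nn

class SimpleEncoder(nn.Module):
    """Hypothetical stand-in for a pretrained vision backbone (e.g. SWIN or PVT)."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> one vector per image
            nn.Flatten(),
            nn.Linear(32, out_dim),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)

# Two different extractors whose outputs are concatenated into one image feature.
swin_like = SimpleEncoder(out_dim=768)
pvt_like = SimpleEncoder(out_dim=512)

images = torch.randn(4, 3, 224, 224)   # dummy batch of 4 RGB images
fused = torch.cat([swin_like(images), pvt_like(images)], dim=-1)
print(fused.shape)                      # torch.Size([4, 1280]), fed to the caption decoder
```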
Aspect-Based Sentiment Classification Using Deep Learning and Hybrid of Word Embedding and Contextual Position
2
Authors: Waqas Ahmad, Hikmat Ullah Khan, Fawaz Khaled Alarfaj, Saqib Iqbal, Abdullah Mohammad Alomair, Naif Almusallam. 《Intelligent Automation & Soft Computing》 (SCIE), 2023, Issue 9, pp. 3101-3124 (24 pages)
Aspect-based sentiment analysis aims to detect and classify sentiment polarities as negative, positive, or neutral while associating them with the aspects identified in the corresponding context. Prior methodologies widely utilize either word embeddings or tree-based representations, and the separate use of these deep features has become a significant cause of information loss. Word embeddings preserve the syntactic and semantic relations between terms in a sentence, while tree-based structures conserve the grammatical and logical dependencies of the context. In addition, the position of a word in a sentence is a critical factor that influences the contextual information of the targeted sentence, so knowledge of position-oriented information is also considered significant. In this study, we propose to use word embeddings, tree-based representations, and contextual position information in combination, and evaluate whether their combination improves effectiveness. Their joint utilization enhances the accurate identification and extraction of targeted aspect terms, which in turn influences their classification. We propose a method named Attention-Based Multi-Channel Convolutional Neural Network (Att-MC-CNN) that jointly utilizes these three deep features: word embeddings, tree-based structure, and contextual position information. These three inputs are delivered to a Multi-Channel Convolutional Neural Network (MC-CNN) that identifies and extracts the potential terms and classifies their polarities. The terms are further filtered with an attention mechanism that determines the most significant words. Empirical analysis proves the proposed approach's effectiveness compared to existing techniques on standard datasets. The experimental results show that our approach outperforms others in F1 measure, achieving 94% overall in identifying aspects and 92% in sentiment classification.
Keywords: sentiment analysis; word embedding; aspect extraction; consistency tree; multi-channel convolutional neural network; contextual position information
Word Embeddings and Semantic Spaces in Natural Language Processing
3
Author: Peter J. Worth. 《International Journal of Intelligence Science》, 2023, Issue 1, pp. 1-21 (21 pages)
One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) in the last two decades has been the development of techniques for text representation that solve the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, upwards of hundreds of thousands of terms typically. As such, much of the research and development in NLP in the last two decades has been in finding and optimizing solutions to this problem, that is, to feature selection in NLP. This paper looks at the development of these techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we look at the development of some of the most popular of these techniques from a mathematical as well as a data structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
Keywords: natural language processing; vector space models; semantic spaces; word embeddings; representation learning; text vectorization; machine learning; deep learning
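In practice, the distributional-hypothesis view surveyed here comes down to comparing word vectors by angle. A minimal numpy sketch with made-up toy vectors (not taken from any of the models surveyed):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """cos(theta) between two vectors; close to 1.0 means 'found in similar contexts'."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "embeddings" for illustration only.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.8, 0.9, 0.1, 0.1]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

print(cosine_similarity(vectors["king"], vectors["queen"]))  # high: similar contexts
print(cosine_similarity(vectors["king"], vectors["apple"]))  # low: different contexts
```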
Short Text Feature Expansion and Classification Based on Word Embedding (Cited by 8)
4
Authors: 孟欣, 左万利. 《小型微型计算机系统》 (CSCD, PKU Core), 2017, Issue 8, pp. 1712-1717 (6 pages)
The recent surge of short texts has challenged traditional automatic text classification techniques. To address the sparse features and low feature coverage of short texts, this paper proposes a classification method that expands short-text features with word embeddings. A word embedding is a distributed representation of a word as a low-dimensional continuous vector, and a well-trained word embedding model can encode many linguistic rules and patterns. Exploiting the spatial distribution of word embeddings and the linear regularities they encode, a new text feature expansion method is proposed. Using the expanded features, short-text classification experiments were conducted on two datasets, Google search snippets and China Daily news summaries; compared with a classification method that represents text features with bag-of-words only, accuracy improved by 8.59% and 7.42%, respectively.
Keywords: word embedding; text features; semantic inference; short text classification
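The feature-expansion idea, adding embedding-space neighbors of each short-text term to the feature set before classification, can be sketched with gensim's `most_similar`. This is only one plausible reading of the paper's expansion rule; the `model.bin` path, the neighbor count, and the tokenization are assumptions.

```python
from gensim.models import KeyedVectors

def expand_short_text(tokens, kv, topn=3):
    """Augment a short text's bag of features with embedding-space neighbors."""
    expanded = list(tokens)
    for tok in tokens:
        if tok in kv:                                 # skip out-of-vocabulary tokens
            neighbors = kv.most_similar(tok, topn=topn)
            expanded.extend(word for word, _score in neighbors)
    return expanded

# Assumed: a pretrained word2vec file in binary format.
kv = KeyedVectors.load_word2vec_format("model.bin", binary=True)
print(expand_short_text(["apple", "stock", "falls"], kv))
```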
Disambiguation of Letter Abbreviations Based on Word Embedding Semantic Similarity (Cited by 6)
5
Authors: 于东, 荀恩东. 《中文信息学报》 (CSCD, PKU Core), 2014, Issue 5, pp. 51-59 (9 pages)
This paper proposes a Word Embedding-based representation of the multiple senses of an ambiguous word, enabling unsupervised, knowledge-base-driven disambiguation of letter abbreviations. The method clusters in two steps. First, salient-similarity clustering produces high-confidence clusters, from which a document collection with semantic labels is constructed as training data. Multiple Word Embedding models are trained on this data, and the semantic relation between two words is represented by the mean of their cosine similarities across models. In the second clustering step, feature-word expansion and semantic linear weighting are introduced to improve sense discrimination and disambiguation performance: the feature-word set of the document to be disambiguated is expanded according to semantic similarity, recovering semantic information missing from the clustered documents, and feature-word weights are linearly weighted by semantic similarity. Experiments on 25 ambiguous abbreviations show that feature-word expansion raises the system F-score by about 4%, and semantic linear weighting raises it by a further 2%, reaching 89.40%.
Keywords: letter abbreviations; term disambiguation; word embedding; semantic similarity
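The paper's use of several Word Embedding models, with the semantic relation of two words taken as the mean cosine similarity across models, can be sketched as follows; the toy corpus and hyperparameters are placeholders, not the paper's settings.

```python
from gensim.models import Word2Vec

corpus = [
    ["ct", "scan", "shows", "lesion"],
    ["the", "ct", "image", "was", "reviewed"],
    ["scan", "image", "of", "the", "lesion"],
] * 50  # tiny toy corpus, repeated so training does something

# Train several embeddings with different seeds and average their similarities.
models = [Word2Vec(corpus, vector_size=50, min_count=1, seed=s, epochs=20) for s in range(3)]

def mean_similarity(w1: str, w2: str) -> float:
    """Semantic relation of two words = mean cosine similarity over all models."""
    return float(sum(m.wv.similarity(w1, w2) for m in models) / len(models))

print(mean_similarity("ct", "scan"))
```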
Remote Sensing Image Detection and Segmentation Based on Word Embedding (Cited by 6)
6
Authors: 尤洪峰, 田生伟, 禹龙, 吕亚龙. 《电子学报》 (EI, CAS, CSCD, PKU Core), 2020, Issue 1, pp. 75-83 (9 pages)
Remote sensing image detection and segmentation is typically realized by extracting image features and mining deep features with deep learning algorithms. However, traditional features (such as color, texture, and spatial relations) cannot fully describe the semantic information of an image, and single-structure or serial algorithms cannot fully mine deep features and contextual semantics. To address this, spatial-relation features are mapped into dense real-valued vectors through word embedding and combined with color and texture features. A parallel detection-and-segmentation algorithm, ATGIR (Attention Graph Convolution Networks and Independently Recurrent Neural Network), is then constructed. The algorithm first assigns probabilistic weights to the combined features with an attention mechanism; a graph convolutional network (GCN) then further mines the highly weighted features and generates orientation labels, while an independently recurrent neural network (IndRNN) mines contextual information in the image features; finally, a Sigmoid classifier completes the detection and segmentation task. Taking the detection and segmentation of Populus euphratica forests in remote sensing imagery as an example, the proposed feature extraction method and the ATGIR algorithm are shown to effectively improve performance on this task.
Keywords: attention mechanism; graph convolutional network; independently recurrent neural network; parallel algorithm; word embedding
A Sentiment Classification Model Based on Word Embedding and CNN (Cited by 20)
7
Authors: 蔡慧苹, 王丽丹, 段书凯. 《计算机应用研究》 (CSCD, PKU Core), 2016, Issue 10, pp. 2902-2905, 2909 (5 pages)
This work combines word embeddings with a convolutional neural network (CNN) for sentiment classification. First, a skip-gram model is trained to obtain a word embedding for every word in the dataset; the embeddings of the words in each sample are then stacked into a two-dimensional feature matrix that serves as the CNN input, and the input features are also updated as parameters during each training iteration. Second, a network architecture with convolution kernels of three different sizes is designed to automatically extract multiple kinds of local abstract features. Compared with traditional machine learning methods, the proposed word embedding + CNN sentiment classification model improves classification accuracy by 5.04%.
Keywords: convolutional neural network; natural language processing; deep learning; word embedding; sentiment classification
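The architecture outlined above, word embeddings stacked into a matrix and convolved with kernels of three different sizes followed by pooling and a classifier, matches the common text-CNN pattern. A minimal PyTorch sketch; vocabulary size, embedding size, and kernel heights are assumed values, not the paper's.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=128, n_classes=2, kernel_heights=(3, 4, 5), n_filters=100):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # updated during training, as in the paper
        self.convs = nn.ModuleList(
            [nn.Conv2d(1, n_filters, kernel_size=(h, emb_dim)) for h in kernel_heights]
        )
        self.fc = nn.Linear(n_filters * len(kernel_heights), n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids).unsqueeze(1)            # (batch, 1, seq_len, emb_dim)
        pooled = [torch.relu(conv(x)).squeeze(3).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))               # concatenate multi-size features

model = TextCNN()
logits = model(torch.randint(0, 5000, (8, 40)))                # batch of 8 sentences, 40 tokens each
print(logits.shape)                                            # torch.Size([8, 2])
```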
TWE-WSD: An effective topical word embedding based word sense disambiguation (Cited by 1)
8
Authors: Lianyin Jia, Jilin Tang, Mengjuan Li, Jinguo You, Jiaman Ding, Yinong Chen. 《CAAI Transactions on Intelligence Technology》 (EI), 2021, Issue 1, pp. 72-79 (8 pages)
Word embedding has been widely used in word sense disambiguation (WSD) and many other tasks in recent years, for it can well represent the semantics of words. However, the existing word embedding methods mostly represent each word as a single vector, without considering the homonymy and polysemy of the word; thus, their performances are limited. In order to address this problem, an effective topical word embedding (TWE)-based WSD method, named TWE-WSD, is proposed, which integrates Latent Dirichlet Allocation (LDA) and word embedding. Instead of generating a single word vector (WV) for each word, TWE-WSD generates a topical WV for each word under each topic. Effective integrating strategies are designed to obtain high quality contextual vectors. Extensive experiments on SemEval-2013 and SemEval-2015 for English all-words tasks showed that TWE-WSD outperforms other state-of-the-art WSD methods, especially on nouns.
Keywords: word embedding; WSD
A Multi-Emotion Classification Method for Music Lyrics Using a Multi-Kernel Convolutional Neural Network with Word2Vec Embeddings
9
Authors: 张昱, 冯亚寒, 丁千惠. 《科学技术与工程》 (PKU Core), 2024, Issue 20, pp. 8598-8605 (8 pages)
At present, sentiment classification of music lyrics mostly uses two-label polarity; multi-label emotion classification is rare, and performance is poor for lyrics whose emotional content is uncertain. To remedy the lack of multi-emotion classification research and to improve accuracy, this paper proposes a multi-emotion classification method for lyrics that uses Word2Vec embeddings and a multi-kernel convolutional neural network as the classifier. The method first preprocesses and visually analyzes the lyric text; it then uses Word2Vec embeddings to extract local features of the lyrics, build emotion feature vectors, mine the emotional information in the lyrics, and convert the lyrics into word vectors better suited as classifier input; finally, a convolutional neural network extended with kernels of different heights performs the multi-emotion classification. The results show a multi-emotion classification accuracy of 94.26%; compared with a conventional CNN, classification precision improves by 6.86%, demonstrating good performance.
Keywords: natural language processing; sentiment classification; convolutional neural network; word embedding; text classification; music lyrics
Word Embedding Algorithms Based on Count Models
10
Authors: 裴楠, 王裴岩, 张桂平. 《沈阳航空航天大学学报》, 2017, Issue 2, pp. 66-72 (7 pages)
Word embedding is a widely used technique for text processing tasks. Compared with prediction-based models, count-model-based word embeddings are simple, fast, easy to train, and good at capturing word similarity. Based on count models, five word embedding models are constructed from two context settings, two weighting schemes, and two similarity measures. Comparing the five models on word similarity tasks shows that word representations after dimensionality reduction outperform those without it; among the five, the model using window contexts, PMI weighting, and cosine similarity performs best on word similarity. The five models are also compared with the prediction-based Skip-gram model; with 100-dimensional training vectors, most count-based models reach performance on word similarity equal to or better than Skip-gram.
Keywords: word representation; count model; distributed word representation; word similarity
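The count-based pipeline compared in this paper (window co-occurrence counts, a PMI weighting, dimensionality reduction, and cosine similarity) can be sketched end to end in numpy. A toy illustration with an invented corpus, window size, and reduced dimension:

```python
import numpy as np

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# 1. Window co-occurrence counts (window = 2 on each side).
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                counts[idx[w], idx[sent[j]]] += 1

# 2. Positive PMI weighting.
total = counts.sum()
p_w = counts.sum(axis=1, keepdims=True) / total
p_c = counts.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log((counts / total) / (p_w * p_c))
ppmi = np.maximum(pmi, 0)

# 3. Dimensionality reduction via truncated SVD (keep 3 dimensions here).
u, s, _ = np.linalg.svd(ppmi)
vectors = u[:, :3] * s[:3]

# 4. Cosine similarity between two reduced word vectors.
def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cos(vectors[idx["cat"]], vectors[idx["dog"]]))
```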
Novel Representations of Word Embedding Based on the Zolu Function
11
Authors: Jihua Lu, Youcheng Zhang. 《Journal of Beijing Institute of Technology》 (EI, CAS), 2020, Issue 4, pp. 526-530 (5 pages)
Two learning models, Zolu-continuous bags of words (ZL-CBOW) and Zolu-skip-grams (ZL-SG), based on the Zolu function are proposed. The slope of ReLU in word2vec is changed by the Zolu function. The proposed models can process extremely large data sets as well as word2vec can without increasing the complexity. Also, the models outperform several word embedding methods both in word similarity and syntactic accuracy. The ZL-CBOW method outperforms CBOW in accuracy by 8.43% on the capital-world training set and by 1.24% on the plural-verbs training set. Moreover, experimental simulations on word similarity and syntactic accuracy show that ZL-CBOW and ZL-SG are superior to LL-CBOW and LL-SG, respectively.
Keywords: Zolu function; word embedding; continuous bag of words; word similarity; accuracy
Statute Recommendation Based on Word Embedding
12
Authors: Peitang Ling, Zian Wang, Yi Feng, Jidong Ge, Mengting He, Chuanyi Li, Bin Luo. 《国际计算机前沿大会会议论文集》, 2019, Issue 1, pp. 546-548 (3 pages)
The statute recommendation problem is a sub-problem of automated decision systems, which can help legal staff handle case processing in an intelligent and automated way. In this paper, an improved common-word similarity algorithm is proposed for normalization. Meanwhile, the word mover's distance (WMD) algorithm is applied to similarity measurement and statute recommendation, extending a problem setting originally used for classification. Finally, a variety of recommendation strategies different from traditional collaborative filtering methods are proposed. The experimental results show that the approach achieves a best F-measure of 0.799, and a comparative experiment shows that the WMD algorithm achieves better results than the TF-IDF and LDA algorithms.
Keywords: statute recommendation; word embedding; word mover's distance; collaborative filtering
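Word Mover's Distance, which the abstract applies to statute similarity, is exposed by gensim as `wmdistance`. A minimal sketch under assumptions: the `law_vectors.bin` file and the toy documents are placeholders, and recent gensim versions need an optimal-transport backend (e.g. POT; older versions used pyemd) for this call.

```python
from gensim.models import KeyedVectors

# Assumed: pretrained word vectors for the legal domain.
kv = KeyedVectors.load_word2vec_format("law_vectors.bin", binary=True)

case_description = ["defendant", "stole", "vehicle", "at", "night"]
statute_a = ["theft", "of", "motor", "vehicle"]
statute_b = ["breach", "of", "rental", "contract"]

# Lower distance = semantically closer; recommend the statute with the smallest WMD.
distances = {"statute_a": kv.wmdistance(case_description, statute_a),
             "statute_b": kv.wmdistance(case_description, statute_b)}
print(min(distances, key=distances.get))
```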
An Implementation Method for Building Word Vector Models with Word2vec (Cited by 7)
13
Authors: 席宁丽, 朱丽佳, 王录通, 陈俊, 万晓容. 《电脑与信息技术》, 2023, Issue 1, pp. 43-46 (4 pages)
Word2vec is a natural language processing method based on a simple neural network; it is a word embedding technique that can be used to build high-dimensional word vectors. This study builds and analyzes a Word2vec word vector representation model: training on the NLPCC2014 corpus maps words into a high-dimensional vector space and produces a working Word2vec implementation with visualized output. The experiments further compare CBOW and Skip-gram, the two key models within Word2vec. The results show that when training Chinese word vectors on a large corpus, Skip-gram has a clear advantage in recognizing new words and, balancing model accuracy against runtime, offers better overall reliability.
Keywords: word vector; word2vec; CBOW; Skip-gram; NLP
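The CBOW versus Skip-gram comparison reported here reduces to a single flag in gensim. A minimal sketch with a toy tokenized corpus standing in for the NLPCC2014 corpus used in the paper:

```python
from gensim.models import Word2Vec

# Toy tokenized corpus; the paper trains on NLPCC2014 instead.
sentences = [["自然", "语言", "处理"],
             ["词", "向量", "表示"],
             ["自然", "语言", "词", "向量"]] * 100

cbow = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=0)       # sg=0 -> CBOW
skipgram = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)   # sg=1 -> Skip-gram

print(cbow.wv.most_similar("语言", topn=3))
print(skipgram.wv.most_similar("语言", topn=3))
```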
An Automated System to Predict Popular Cybersecurity News Using Document Embeddings
14
Authors: Ramsha Saeed, Saddaf Rubab, Sara Asif, Malik M. Khan, Saeed Murtaza, Seifedine Kadry, Yunyoung Nam, Muhammad Attique Khan. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2021, Issue 5, pp. 533-547 (15 pages)
The substantial competition among the news industries puts editors under the pressure of posting news articles which are likely to gain more user attention. Anticipating the popularity of news articles can help the editorial teams in making decisions about posting a news article. Article similarity extracted from the articles posted within a small period of time is found to be a useful feature in existing popularity prediction approaches. This work proposes a new approach to estimate the popularity of news articles by adding semantics to the article-similarity-based approach of popularity estimation. A semantically enriched model is proposed which estimates news popularity by measuring cosine similarity between document embeddings of the news articles. A Word2vec model has been used to generate distributed representations of the news content. In this work, we define popularity as the number of times a news article is posted on different websites. We collect data from different websites that post news concerning the domain of cybersecurity and estimate the popularity of cybersecurity news. The proposed approach is compared with different models and it is shown that it outperforms the other models.
Keywords: embeddings; semantics; cosine similarity; popularity; word2vec
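One common way to obtain the document embeddings the abstract relies on is to average the Word2vec vectors of a document's words and compare documents by cosine similarity. The paper's exact pooling is not stated here, so treat this as one plausible sketch with invented headlines.

```python
import numpy as np
from gensim.models import Word2Vec

headlines = [["ransomware", "attack", "hits", "hospital", "network"],
             ["new", "ransomware", "strain", "targets", "hospitals"],
             ["stock", "market", "rallies", "after", "earnings"]] * 50

model = Word2Vec(headlines, vector_size=100, min_count=1)

def doc_embedding(tokens):
    """Document vector = mean of its word vectors (in-vocabulary words only)."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

a, b, c = (doc_embedding(h) for h in headlines[:3])
print(cosine(a, b))   # two cybersecurity headlines: relatively similar
print(cosine(a, c))   # cybersecurity vs. finance: less similar
```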
Standardization of Robot Instruction Elements Based on Conditional Random Fields and Word Embedding
15
Authors: Hengsheng Wang, Zhengang Zhang, Jin Ren, Tong Liu. 《Journal of Harbin Institute of Technology (New Series)》 (EI, CAS), 2019, Issue 5, pp. 32-40 (9 pages)
Natural language processing has made great progress recently, and controlling robots with spoken natural language has become feasible. With the reliability of this kind of control in mind, a confirmation step for natural language instructions should be included before the robot carries them out autonomously, so a prototype dialog system was designed; this raises a standardization problem for natural and understandable language interaction. In the application setting of remotely navigating a mobile robot inside a building with spoken Chinese, and considering that a place name, an important navigation element in instructions, can be expressed with different lexical terms in spoken language, this paper proposes a model for substituting the different alternatives of a place name with a standard one (called standardization). First, a CRF (Conditional Random Fields) model is trained to label the terms requiring standardization; then a trained word embedding model represents lexical terms as numeric vectors. In the vector space, similarity between lexical terms is defined and used to find the standard term most similar to the term selected for standardization. Experiments show that the proposed method works well and that the dialog system's confirmation responses are natural and understandable.
Keywords: word embedding; Conditional Random Fields (CRFs); standardization; human-robot interaction; Chinese Natural Spoken Language (CNSL); Natural Language Processing (NLP)
Fault Alarm Prediction for CNC Machine Tools Based on Word2vec and LSTM-SVM (Cited by 1)
16
Authors: 王梓琦, 张铫, 夏雨风, 任杰文. 《组合机床与自动化加工技术》 (PKU Core), 2023, Issue 4, pp. 71-75, 81 (6 pages)
In large-scale automated industrial production, downtime caused by various faults in clusters of CNC machine tools lowers production-line efficiency; predicting faults promptly and accurately enables preventive inspection and maintenance and improves overall efficiency. Against the background of intelligent manufacturing, and driven by the large volume of historical fault alarm data accumulated by CNC machine tools, this paper designs a fault alarm prediction method based on Word2vec and LSTM-SVM to predict faults a machine may experience in the future. Alarm texts are first vectorized with a word embedding technique; the alarm vectors are then fed into a long short-term memory (LSTM) prediction model, with a support vector machine (SVM) replacing the conventional softmax as the model's final classifier. Experimental results show that the method achieves higher prediction accuracy.
Keywords: fault prediction; alarm data; word embedding; long short-term memory network; support vector machine
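The LSTM-SVM arrangement described above, where an LSTM encodes the alarm-word vectors and an SVM replaces softmax as the final classifier, can be sketched by feeding the LSTM's last hidden state to scikit-learn's SVC. The dimensions and random data are placeholders; in the paper the encoder would first be trained on real alarm sequences.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class AlarmEncoder(nn.Module):
    """Encodes a sequence of alarm-word vectors into one feature vector (last hidden state)."""
    def __init__(self, emb_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)       # h_n: (1, batch, hidden)
        return h_n.squeeze(0)

# Placeholder data: 200 alarm sequences, 10 time steps, 64-dim word vectors, 5 fault classes.
sequences = torch.randn(200, 10, 64)
labels = torch.randint(0, 5, (200,))

encoder = AlarmEncoder()                  # in practice, trained first on real alarm data
with torch.no_grad():
    features = encoder(sequences).numpy()

svm = SVC(kernel="rbf")                   # SVM replaces the softmax layer as the final classifier
svm.fit(features[:150], labels[:150].numpy())
print(svm.score(features[150:], labels[150:].numpy()))
```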
A Survey of Word Embedding Methods in Natural Language Processing (Cited by 2)
17
Authors: 曾骏, 王子威, 于扬, 文俊浩, 高旻. 《计算机科学与探索》 (CSCD, PKU Core), 2024, Issue 1, pp. 24-43 (20 pages)
Word embedding, the first step of natural language processing tasks, converts input natural language text into numeric vectors that a model can process, known as word vectors or distributed representations of words. As the foundation of NLP tasks, word vectors are a prerequisite for all of them. However, existing surveys of word embedding methods, both domestic and international, mostly focus on the technical routes of individual methods and do not analyze the tokenization methods that precede embedding or the full evolution of embedding methods. Taking the word2vec model and the Transformer model as dividing points, and asking whether the generated word vectors can dynamically adapt their implicit semantics to the overall meaning of the input sentence, this survey divides word embedding methods into static and dynamic methods and discusses them accordingly. It also compares and analyzes the tokenization methods used for word embedding, including whole-word and subword segmentation; traces the evolution of the language models used to train word vectors, from probabilistic language models to neural probabilistic language models to today's deep contextual language models; and summarizes the training strategies used in pre-training language models. Finally, it reviews methods for evaluating word vector quality, analyzes the current state of word embedding methods, and discusses future directions.
Keywords: word vector; word embedding methods; natural language processing; language model; word segmentation; word vector evaluation
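The survey's static/dynamic distinction is easy to see in code: a static embedding assigns one vector per word, while a contextual model gives the same word different vectors in different sentences. A small sketch assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint, neither of which the survey itself prescribes.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Dynamic (contextual) embedding: the same surface word gets a different vector
# depending on the sentence, unlike a static word2vec-style lookup table.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence: str, word: str) -> torch.Tensor:
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]          # (seq_len, 768)
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]                          # vector of the word's first occurrence

v_river = word_vector("i sat on the bank of the river", "bank")
v_money = word_vector("i deposited money at the bank", "bank")
print(torch.cosine_similarity(v_river, v_money, dim=0))        # below 1: context changed the vector
```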
Distantly Supervised Relation Extraction with a Twice-Attention Mechanism Based on BERT
18
Authors: 袁泉, 陈昌平, 陈泽, 詹林峰. 《计算机应用》 (CSCD, PKU Core), 2024, Issue 4, pp. 1080-1085 (6 pages)
To address incomplete semantic information in word vectors and polysemy during text feature extraction, a twice-attention weighting algorithm based on BERT (Bidirectional Encoder Representation from Transformer), called TARE, is proposed. First, in the word vector encoding stage, Q, K and V matrices are constructed and a self-attention dynamic encoding algorithm captures the semantics of the surrounding words for the current word's vector. Second, after the model outputs sentence-level feature vectors, the parameters of the corresponding fully connected layer are extracted with position markers to build a relation attention matrix. Finally, a sentence-level attention mechanism assigns a different attention score to each sentence-level feature vector, improving the noise robustness of sentence-level features. Experimental results show that on the NYT-10m dataset, compared with the contrastive-learning-based CIL (Contrastive Instance Learning) algorithm, TARE improves F1 by 4.0 percentage points and the mean of Precision@N over the top 100, 200 and 300 predictions ranked by descending confidence (P@M) by 11.3 percentage points; on the NYT-10d dataset, compared with the attention-based PCNN-ATT (Piecewise Convolutional Neural Network algorithm based on ATTention mechanism) algorithm, the area under the precision-recall curve (AUC) improves by 4.8 percentage points and P@M by 2.1 percentage points. On the mainstream distantly supervised relation extraction task, TARE effectively improves the model's ability to learn data features.
Keywords: distant supervision; relation extraction; attention mechanism; word vector features; fully connected layer
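The first step the abstract describes, building Q, K and V matrices so that each word vector is re-encoded with attention over the surrounding words, follows the standard scaled dot-product self-attention formulation. A minimal PyTorch sketch with invented dimensions, not the TARE model itself:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** 0.5

    def forward(self, x):                                  # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / self.scale      # word-to-word relevance
        weights = torch.softmax(scores, dim=-1)
        return weights @ v                                 # each word vector now mixes in its context

x = torch.randn(2, 8, 64)                                  # 2 sentences, 8 tokens, 64-dim vectors
print(SelfAttention()(x).shape)                            # torch.Size([2, 8, 64])
```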
Visual-Based Character Embedding via Principal Component Analysis
19
Authors: Linchao He, Dejun Zhang, Long Tian, Fei Han, Mengting Luo, Yilin Chen, Yiqi Wu. 《国际计算机前沿大会会议论文集》, 2018, Issue 1, p. 16 (1 page)
An Exploration of Sentiment Analysis for Online Teaching Evaluations in Higher Vocational Colleges Based on Word2Vec and Bi-GRU
20
Authors: 李淼冰, 王威, 王成成. 《广东水利电力职业技术学院学报》, 2023, Issue 3, pp. 73-77 (5 pages)
To improve the accuracy and efficiency of sentiment analysis of online teaching evaluations in higher vocational colleges, a sentiment analysis method based on a bidirectional gated recurrent unit (Bi-GRU) network is proposed. The method learns education-domain word embedding vectors with a Skip-gram neural network and then uses two Bi-GRU networks with identical architectures to perform fine-grained analysis of student feedback from different perspectives. Experimental results show content classification accuracy of 97% and sentiment classification accuracy of 95%, significantly better than support vector machines (SVM), long short-term memory networks (LSTM), and other methods.
Keywords: teaching evaluation; sentiment analysis; bidirectional gated recurrent unit; word embedding vector; sentiment polarity; fine-grained analysis
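The Bi-GRU classifier at the core of this method, an embedding layer followed by a bidirectional GRU and a small classification head, can be sketched in PyTorch. Vocabulary size and dimensions are invented, and the paper's two parallel networks (content and sentiment) are simplified to a single head here.

```python
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, vocab_size=8000, emb_dim=128, hidden=64, n_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # initialized from Skip-gram vectors in the paper
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden * 2, n_classes)           # *2: forward + backward final states

    def forward(self, token_ids):
        _, h_n = self.gru(self.embedding(token_ids))         # h_n: (2, batch, hidden)
        return self.fc(torch.cat([h_n[0], h_n[1]], dim=1))

model = BiGRUClassifier()
print(model(torch.randint(0, 8000, (4, 30))).shape)          # torch.Size([4, 3])
```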