Journal Articles
37 articles found
1. A Joint Entity Relation Extraction Model Based on Relation Semantic Template Automatically Constructed
Authors: Wei Liu, Meijuan Yin, Jialong Zhang, Lunchong Cui. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 975-997 (23 pages)
The joint entity relation extraction model that integrates the semantic information of relations is favored by researchers because of its effectiveness in handling overlapping entities, and manually defining relation semantic templates is particularly effective because it captures the deep semantic information of relations. However, this approach relies on expert experience and ports poorly to new settings. Inspired by rule-based entity relation extraction, this paper proposes a joint entity relation extraction model based on automatically constructed relation semantic templates, abbreviated RSTAC. The model refines the extraction rules of relation semantic templates from a relation corpus through dependency parsing and thereby constructs the templates automatically. Based on the relation semantic templates, the processes of relation classification and triplet extraction are constrained, and the entity relation triplets are finally obtained. Experimental results on the three major Chinese datasets DuIE, SanWen, and FinRE show that RSTAC successfully captures rich deep relation semantics, improves the extraction of entity relation triples, and increases F1 scores by an average of 0.96% compared with classical joint extraction models such as CasRel, TPLinker, and RFBFN.
Keywords: natural language processing, deep learning, information extraction, relation extraction, relation semantic template
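A minimal sketch of the template idea described above, assuming spaCy with an English model purely for illustration (the paper refines its rules from Chinese corpora with its own parser and refinement step): the dependency path between the two entities is turned into a reusable relation template.

import spacy

nlp = spacy.load("en_core_web_sm")

def dependency_path(t1, t2):
    # Shortest path between two tokens via their lowest common ancestor.
    up1 = [t1] + list(t1.ancestors)
    up2 = [t2] + list(t2.ancestors)
    ids2 = [t.i for t in up2]
    common_pos = next(k for k, t in enumerate(up1) if t.i in ids2)
    down = up2[:ids2.index(up1[common_pos].i)]
    return up1[:common_pos + 1] + list(reversed(down))

def relation_template(sentence, ent1, ent2):
    # Replace the two entity tokens with slots and lemmatize the rest of the path.
    doc = nlp(sentence)
    t1 = next(t for t in doc if t.text == ent1)
    t2 = next(t for t in doc if t.text == ent2)
    parts = ["<E1>" if tok.i == t1.i else "<E2>" if tok.i == t2.i else tok.lemma_
             for tok in dependency_path(t1, t2)]
    return " ".join(parts)

print(relation_template("Jobs founded Apple in 1976.", "Jobs", "Apple"))
# e.g. "<E1> found <E2>" (exact output depends on the parser and lemmatizer)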
2. Graph Convolutional Networks Embedding Textual Structure Information for Relation Extraction
Authors: Chuyuan Wei, Jinzhe Li, Zhiyuan Wang, Shanshan Wan, Maozu Guo. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 3299-3314 (16 pages)
Deep neural network-based relation extraction research has made significant progress in recent years, providing data support for many downstream natural language processing tasks such as knowledge graph construction, sentiment analysis, and question-answering systems. However, previous studies ignored much unused structural information in sentences that could enhance relation extraction performance. Moreover, most existing dependency-based models use self-attention to weigh the importance of context, which hardly handles multiple kinds of structural information. To leverage multiple structures efficiently, this paper proposes a dynamic structure attention model based on textual structure information that deeply integrates word embeddings, named entity recognition labels, part of speech, the dependency tree, and dependency types into a graph convolutional network. Specifically, the model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features across the different structural views. In addition, multi-structure weights are carefully designed as a merging mechanism over the per-structure attention to dynamically adjust the final attention. The combined features are used to train a graph convolutional network for relation extraction. Experiments on the supervised relation extraction datasets SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED show results that significantly outperform previous work.
Keywords: relation extraction, graph convolutional neural networks, dependency tree, dynamic structure attention
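A minimal PyTorch sketch of the merging idea, not the authors' code: each structural view (for example dependency adjacency, dependency-type adjacency, sequential adjacency) gets its own GCN pass, and learned, input-dependent weights decide how much each view contributes.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStructureGCN(nn.Module):
    def __init__(self, dim, num_views):
        super().__init__()
        self.gcn = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_views)])
        self.view_scorer = nn.Linear(dim, 1)  # scores each view's summary vector

    def forward(self, x, adjs):
        # x: (batch, seq, dim); adjs: one (batch, seq, seq) adjacency per structural view
        views = [F.relu(adj @ w(x)) for adj, w in zip(adjs, self.gcn)]
        stacked = torch.stack(views, dim=1)                               # (batch, views, seq, dim)
        alpha = F.softmax(self.view_scorer(stacked.mean(dim=2)), dim=1)   # (batch, views, 1)
        return (alpha.unsqueeze(-1) * stacked).sum(dim=1)                 # merged (batch, seq, dim)

x = torch.randn(2, 10, 64)
adjs = [torch.eye(10).repeat(2, 1, 1) for _ in range(3)]
out = MultiStructureGCN(64, num_views=3)(x, adjs)  # (2, 10, 64)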
3. A Graph with Adaptive Adjacency Matrix for Relation Extraction
Authors: Run Yang, Yanping Chen, Jiaxin Yan, Yongbin Qin. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4129-4147 (19 pages)
A relation is a semantic expression involving two named entities in a sentence. Since a sentence usually contains several named entities, it is essential to learn a structured sentence representation that encodes dependency information specific to the two entities in question. In related work, graph convolutional neural networks are widely adopted to learn semantic dependencies, with a dependency tree initializing the adjacency matrix. However, this approach has two main issues. First, parsing a sentence relies heavily on external toolkits, which can be error-prone. Second, the dependency tree only encodes the syntactic structure of a sentence, which may not align with the relational semantic expression. In this paper, we propose an automatic graph learning method that learns a sentence's structural information on its own. Instead of using a fixed adjacency matrix initialized by a dependency tree, we introduce an Adaptive Adjacency Matrix to encode the semantic dependencies between tokens. The elements of this matrix are learned dynamically during training and optimized by task-relevant objectives, enabling the construction of task-relevant semantic dependencies within a sentence. Our model demonstrates superior performance on the TACRED and SemEval 2010 datasets, surpassing previous work by 1.3% and 0.8%, respectively. These results show that our model excels at relation extraction, outperforming prior models.
Keywords: relation extraction, graph convolutional neural network, adaptive adjacency matrix
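A minimal sketch of the adaptive-adjacency idea, under my own simplifying assumptions rather than the paper's exact architecture: the adjacency is a normalized token-to-token score matrix computed from the input and trained with the task loss, so no external parser is needed.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdjGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.gcn = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, seq, dim)
        scores = self.query(x) @ self.key(x).transpose(1, 2) / math.sqrt(x.size(-1))
        adj = F.softmax(scores, dim=-1)        # learned "soft" adjacency, rows sum to 1
        return F.relu(adj @ self.gcn(x)), adj

layer = AdaptiveAdjGCNLayer(dim=128)
h, adj = layer(torch.randn(2, 16, 128))        # adj: (2, 16, 16), optimized end to end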
4. A Two-Phase Paradigm for Joint Entity-Relation Extraction (Cited: 2)
Authors: Bin Ji, Hao Xu, Jie Yu, Shasha Li, Jun Ma, Yuke Ji, Huijun Liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 1303-1318 (16 pages)
An exhaustive study has been conducted on span-based models for the joint entity and relation extraction task. However, these models sample a large number of negative entities and negative relations during training; such samples are essential but produce grossly imbalanced data distributions and in turn cause suboptimal model performance. To address these issues, we propose a two-phase paradigm for span-based joint entity and relation extraction, which classifies the entities and relations in the first phase and predicts their types in the second phase. The two-phase paradigm enables our model to significantly reduce the data distribution gap, including the gap between negative entities and other entities, as well as the gap between negative relations and other relations. In addition, we make the first attempt at combining entity type and entity distance as global features, which proves effective, especially for relation extraction. Experimental results on several datasets demonstrate that the span-based joint extraction model augmented with the two-phase paradigm and the global features consistently outperforms previous state-of-the-art span-based models for the joint extraction task, establishing a new standard benchmark. Qualitative and quantitative analyses further validate the effectiveness of the proposed paradigm and the global features.
Keywords: joint extraction, span-based, named entity recognition, relation extraction, data distribution, global features
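A minimal sketch of the two-phase idea as I read it (names and thresholds are illustrative, not the paper's): phase one keeps only spans and pairs that look like real entities or relations, phase two assigns fine-grained types to the survivors, so the type classifiers are spared the flood of negative samples.

import torch
import torch.nn as nn

class TwoPhaseHeads(nn.Module):
    def __init__(self, dim, num_entity_types, num_relation_types):
        super().__init__()
        self.span_filter = nn.Linear(dim, 1)                       # phase 1: entity vs. negative span
        self.pair_filter = nn.Linear(2 * dim, 1)                   # phase 1: relation vs. negative pair
        self.span_typer = nn.Linear(dim, num_entity_types)         # phase 2: entity type
        self.pair_typer = nn.Linear(2 * dim, num_relation_types)   # phase 2: relation type

    def forward(self, span_reprs, pair_reprs, threshold=0.5):
        # span_reprs: (num_spans, dim); pair_reprs: (num_pairs, 2 * dim)
        keep_spans = torch.sigmoid(self.span_filter(span_reprs)).squeeze(-1) > threshold
        keep_pairs = torch.sigmoid(self.pair_filter(pair_reprs)).squeeze(-1) > threshold
        span_types = self.span_typer(span_reprs[keep_spans])       # typed only for survivors
        pair_types = self.pair_typer(pair_reprs[keep_pairs])
        return keep_spans, span_types, keep_pairs, pair_types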
5. Local-to-Global Causal Reasoning for Cross-Document Relation Extraction
Authors: Haoran Wu, Xiuyi Chen, Zefa Hu, Jing Shi, Shuang Xu, Bo Xu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, No. 7, pp. 1608-1621 (14 pages)
Cross-document relation extraction (RE), as an extension of information extraction, requires integrating information from multiple documents retrieved from open domains that contain large amounts of irrelevant or confusing noisy text. Previous studies focus on attention mechanisms that connect different text features through semantic similarity. However, similarity-based methods cannot reliably distinguish valid information from highly similar retrieved documents. How to design an effective algorithm for aggregated reasoning over confusing information with similar features remains an open issue. To address this problem, we design a novel local-to-global causal reasoning (LGCR) network for cross-document RE, which enables efficient distinguishing, filtering, and global reasoning over complex information from a causal perspective. Specifically, we propose a local causal estimation algorithm to estimate the causal effect, which is the first attempt to use causal reasoning, independent of feature similarity, to distinguish between confusing and valid information in cross-document RE. Furthermore, based on the causal effect, we propose a causality-guided global reasoning algorithm to filter the confusing information and achieve global reasoning. Experimental results under both the closed and open settings of the large-scale CodRED dataset demonstrate that our LGCR network significantly outperforms state-of-the-art methods and validate the effectiveness of causal reasoning for handling confusing information.
Keywords: causal reasoning, cross-document, graph reasoning, relation extraction (RE)
6. Qualia Role-Based Quantity Relation Extraction for Solving Algebra Story Problems
Authors: Bin He, Hao Meng, Zhejin Zhang, Rui Liu, Ting Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, No. 7, pp. 403-419 (17 pages)
A qualia role-based entity-dependency graph (EDG) is proposed to represent and extract quantity relations for solving algebra story problems stated in Chinese. Traditional neural solvers use end-to-end models to translate problem texts into math expressions, which lack quantity relation acquisition in sophisticated scenarios. To address this, the proposed method leverages the EDG to represent quantity relations hidden in the qualia roles of math objects. Algorithms were designed for EDG generation and quantity relation extraction for solving algebra story problems. Experimental results show that the proposed method achieved an average accuracy of 82.2% on quantity relation extraction, compared with 74.5% for the baseline method. A further prompt-learning experiment shows a 5% gain in problem solving when the extracted quantity relations are injected into the baseline neural solvers.
Keywords: quantity relation extraction, algebra story problem solving, qualia role, entity dependency graph
7. Adversarial Learning for Distant Supervised Relation Extraction (Cited: 6)
Authors: Daojian Zeng, Yuan Dai, Feng Li, R. Simon Sherratt, Jin Wang. Computers, Materials & Continua (SCIE, EI), 2018, No. 4, pp. 121-136 (16 pages)
Recently, many researchers have concentrated on using neural networks to learn features for Distant Supervised Relation Extraction (DSRE). These approaches generally use a softmax classifier with cross-entropy loss, which inevitably brings the noise of the artificial class NA into the classification process. To address this shortcoming, a classifier with ranking loss is employed for DSRE. Uniformly randomly selecting a relation, or heuristically selecting the highest-scoring incorrect relation, are two common ways of generating the negative class in the ranking loss function. However, most of the generated negative classes can easily be discriminated from the positive class and contribute little to training. Inspired by Generative Adversarial Networks (GANs), we use a neural network as a negative class generator to assist the training of our desired model, which acts as the discriminator in GANs. Through the alternating optimization of generator and discriminator, the generator learns to produce increasingly hard-to-discriminate negative classes, and the discriminator has to improve as well. This framework is independent of the concrete form of the generator and discriminator. In this paper, we use a two-layer fully-connected neural network as the generator and Piecewise Convolutional Neural Networks (PCNNs) as the discriminator. Experimental results show that our GAN-based method is effective and performs better than state-of-the-art methods.
Keywords: relation extraction, generative adversarial networks, distant supervision, piecewise convolutional neural networks, pair-wise ranking loss
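A minimal sketch of the piecewise max pooling that gives PCNNs (the discriminator here) their name; the adversarial generator itself is omitted, and shapes and positions are illustrative. Convolution features are max-pooled separately over the three sentence segments delimited by the two entity positions.

import torch
import torch.nn as nn

class PiecewisePooling(nn.Module):
    def forward(self, feats, e1_pos, e2_pos):
        # feats: (seq, channels) conv outputs for one sentence; e1_pos < e2_pos entity positions.
        segments = [feats[:e1_pos + 1], feats[e1_pos + 1:e2_pos + 1], feats[e2_pos + 1:]]
        pooled = [seg.max(dim=0).values if seg.numel() else feats.new_zeros(feats.size(1))
                  for seg in segments]
        return torch.cat(pooled)  # (3 * channels,)

pool = PiecewisePooling()
vec = pool(torch.randn(20, 230), e1_pos=3, e2_pos=11)  # vec: (690,)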
8. A Knowledge-Enriched and Span-Based Network for Joint Entity and Relation Extraction (Cited: 4)
Authors: Kun Ding, Shanshan Liu, Yuhao Zhang, Hui Zhang, Xiaoxiong Zhang, Tongtong Wu, Xiaolei Zhou. Computers, Materials & Continua (SCIE, EI), 2021, No. 7, pp. 377-389 (13 pages)
The joint extraction of entities and their relations from text plays a significant role in most natural language processing tasks. For entity and relation extraction in a specific domain, we propose a hybrid neural framework consisting of two parts: a span-based model and a graph-based model. The span-based model can tackle overlapping problems that BILOU-style methods cannot, whereas the graph-based model treats relation prediction as graph classification. Our main contribution is to incorporate external lexical and syntactic knowledge of a specific domain, such as domain dictionaries and dependency structures from texts, into end-to-end neural models. We conducted extensive experiments on a Chinese military entity and relation extraction corpus. The results show that the proposed framework outperforms the baselines in both entity and relation prediction. The proposed method provides insight into the joint extraction of entities and their relations.
Keywords: entity recognition, relation extraction, dependency parsing
9. Lexicalized Dependency Paths Based Supervised Learning for Relation Extraction (Cited: 2)
Authors: Huiyu Sun, Ralph Grishman. Computer Systems Science & Engineering (SCIE, EI), 2022, No. 12, pp. 861-870 (10 pages)
Log-linear models and, more recently, neural network models used for supervised relation extraction require substantial amounts of training data and time, limiting their portability to new relations and domains. To this end, we propose a training representation based on the dependency paths between entities in a dependency tree, which we call lexicalized dependency paths (LDPs). We show that this representation is fast, efficient, and transparent. We further propose representations utilizing entity types and their subtypes to refine our model and alleviate the data sparsity problem. We apply lexicalized dependency paths to supervised learning on the ACE corpus and show that the approach can reach a performance level similar to other state-of-the-art methods and even surpass them on several categories.
Keywords: relation extraction, dependency paths, lexicalized dependency paths, supervised learning, rule-based models
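A minimal sketch of what a lexicalized dependency path feature might look like, with entity-type back-off against sparsity; the path is assumed to be pre-extracted, and the names and feature set here are illustrative rather than the paper's exact design.

def ldp_features(path, e1_type, e2_type):
    # path: e.g. [("Jobs", "nsubj"), ("founded", "ROOT"), ("Apple", "dobj")]
    words = [w for w, _ in path]
    deps = [d for _, d in path]
    lexical = "_".join(f"{w}/{d}" for w, d in path)         # fully lexicalized path
    typed = "_".join([e1_type] + words[1:-1] + [e2_type])   # entities backed off to types
    unlexical = "_".join(deps)                               # dependency labels only
    return {"ldp": lexical, "ldp_typed": typed, "ldp_deps": unlexical}

print(ldp_features([("Jobs", "nsubj"), ("founded", "ROOT"), ("Apple", "dobj")],
                   "PER", "ORG"))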
10. Joint Self-Attention Based Neural Networks for Semantic Relation Extraction (Cited: 1)
Authors: Jun Sun, Yan Li, Yatian Shen, Wenke Ding, Xianjin Shi, Lei Zhang, Xiajiong Shen, Jing He. Journal of Information Hiding and Privacy Protection, 2019, No. 2, pp. 69-75 (7 pages)
Relation extraction is an important task in the NLP community. However, some models often fail to capture long-distance semantic dependence, and the interaction between the semantics of the two entities is ignored. In this paper, we propose a novel neural network model for semantic relation classification called joint self-attention Bi-LSTM (SA-Bi-LSTM), which models the internal structure of the sentence to obtain the importance of each word without relying on additional information and captures long-distance semantic dependence. We conduct experiments on the SemEval-2010 Task 8 dataset. Extensive experiments demonstrate that the proposed method is effective for relation classification and obtains state-of-the-art classification accuracy with only minimal feature engineering.
Keywords: self-attention, relation extraction, neural networks
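A minimal PyTorch sketch of self-attention pooling over Bi-LSTM states for sentence-level relation classification, an illustration of the general SA-Bi-LSTM pattern rather than the authors' exact network (dimensions are arbitrary).

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden, num_classes):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)
        self.cls = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))   # (batch, seq, 2*hidden)
        alpha = F.softmax(self.att(h), dim=1)   # attention weight per token
        sent = (alpha * h).sum(dim=1)           # attention-pooled sentence vector
        return self.cls(sent)

# 19 classes, e.g. the SemEval-2010 Task 8 label set
logits = SelfAttBiLSTM(5000, 100, 128, 19)(torch.randint(0, 5000, (4, 30)))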
11. Knowledge enhanced graph inference network based entity-relation extraction and knowledge graph construction for industrial domain
Authors: Zhulin HAN, Jian WANG. Frontiers of Engineering Management (CSCD), 2024, No. 1, pp. 143-158 (16 pages)
With the escalating complexity of production scenarios, vast amounts of production information are retained within enterprises in the industrial domain, raising the questions of how to meticulously excavate value from complex document information and establish coherent information links. In this work, we present a framework for knowledge graph construction in the industrial domain, predicated on knowledge-enhanced document-level entity and relation extraction. This approach alleviates the shortage of annotated data in the industrial domain and models the interplay of industrial documents. To improve the accuracy of named entity recognition, domain-specific knowledge is incorporated into the initialization of the word embedding matrix within the bidirectional long short-term memory conditional random field (BiLSTM-CRF) framework. For relation extraction, this paper introduces the knowledge-enhanced graph inference (KEGI) network, a method designed for long paragraphs in the industrial domain. It discerns intricate interactions among entities by constructing a document graph and integrates knowledge representation into both node construction and path inference through TransR. At the application level, BiLSTM-CRF and KEGI are used to build a knowledge graph from a knowledge representation model and Chinese fault reports for a steel production line, specifically SPOnto and SPFRDoc. The F1 value for entity and relation extraction is improved by 2% to 6%, and the quality of the extracted knowledge graph meets the requirements of real-world production applications. The results demonstrate that KEGI can delve deeply into production reports, extracting a wealth of knowledge and patterns and thereby providing a comprehensive solution for production management.
Keywords: knowledge graph construction, industrial, BiLSTM-CRF, document-level relation extraction, graph inference
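A minimal sketch of generic TransR scoring, the knowledge-representation component named above (not the full KEGI network): entities are projected into a relation-specific space where h + r should land near t.

import torch
import torch.nn as nn

class TransR(nn.Module):
    def __init__(self, num_ent, num_rel, ent_dim, rel_dim):
        super().__init__()
        self.ent = nn.Embedding(num_ent, ent_dim)
        self.rel = nn.Embedding(num_rel, rel_dim)
        self.proj = nn.Embedding(num_rel, rel_dim * ent_dim)  # one projection matrix M_r per relation

    def score(self, h, r, t):
        m = self.proj(r).view(-1, self.rel.embedding_dim, self.ent.embedding_dim)
        h_r = torch.bmm(m, self.ent(h).unsqueeze(-1)).squeeze(-1)   # M_r h
        t_r = torch.bmm(m, self.ent(t).unsqueeze(-1)).squeeze(-1)   # M_r t
        return -(h_r + self.rel(r) - t_r).norm(p=2, dim=-1)         # higher = more plausible

scores = TransR(1000, 50, ent_dim=100, rel_dim=64).score(
    torch.tensor([0]), torch.tensor([1]), torch.tensor([2]))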
12. Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2481-2503 (23 pages)
In constructing domain-specific knowledge graphs, relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of the semantic interaction between entities and relations, difficulty handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve model performance on domain-specific relational triple extraction. We first propose an attention interaction module that significantly enhances the semantic interaction between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, generating domain-specific data to alleviate the scarcity of domain datasets. Experiments on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: relational triple extraction, semantic interaction, large language models, data augmentation, specific domains
13. Deep learning models for spatial relation extraction in text
Authors: Kehan Wu, Xueying Zhang, Yulong Dang, Peng Ye. Geo-Spatial Information Science (SCIE, EI, CSCD), 2023, No. 1, pp. 58-70 (13 pages)
Spatial relation extraction is the process of identifying geographic entities in text and determining their corresponding spatial relations. Traditional spatial relation extraction mainly uses rule-based pattern matching or supervised and unsupervised learning methods. However, these methods suffer from poor timeliness, high labor cost, and heavy dependence on large-scale data. As pre-trained language models greatly alleviate the shortcomings of traditional methods, supervised learning methods incorporating pre-trained language models have become the mainstream approach to relation extraction. Pipeline extraction and joint extraction, the two dominant paradigms, have both achieved good performance on different datasets; their main difference lies in whether the contextual information of entities and relations is shared. In this paper, we compare the two paradigms for spatial relation extraction on Chinese corpus data in the field of geography and examine which method based on pre-trained language models is better suited to Chinese spatial relation extraction. We fine-tuned the hyperparameters of both models to optimize extraction accuracy before the comparison experiments. The results show that pipeline extraction performs better than joint extraction for Chinese text at sentence granularity, because the two tasks focus on different contextual information and sharing context makes it difficult to satisfy the needs of both. In addition, we compare the performance of the two models with a rule-based template approach in extracting topological, directional, and distance relations, summarize the shortcomings of this experiment, and outline future work.
Keywords: spatial relation extraction, pre-trained language model, pipeline extraction, joint extraction
14. Relation Extraction Based on Prompt Information and Feature Reuse
Authors: Ping Feng, Xin Zhang, Jian Zhao, Yingying Wang, Biao Huang. Data Intelligence (EI), 2023, No. 3, pp. 824-840 (17 pages)
To alleviate the under-utilization of features in sentence-level relation extraction, which leads to insufficient performance of the pre-trained language model and under-use of the feature vector, a sentence-level relation extraction method based on added prompt information and feature reuse is proposed. First, in addition to the pair of nominals and the sentence information, a piece of prompt information is added, so the overall feature information consists of sentence information, entity pair information, and prompt information; the features are then encoded by the pre-trained language model RoBERTa. Moreover, BiGRU is introduced into the network on top of the pre-trained language model to extract information, and the feature information is passed through the network to form several sets of feature vectors. These feature vectors are then reused in different combinations to form multiple outputs, which are aggregated using ensemble-learning soft voting to perform relation classification. In addition, the sum of cross-entropy, KL divergence, and negative log-likelihood loss is used as the final loss function. In comparison experiments, the model based on added prompt information and feature reuse achieved higher results on the SemEval-2010 Task 8 relation dataset.
Keywords: relation extraction, language model, prompt information, feature reuse, loss function
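A minimal sketch of a loss that sums cross-entropy, KL divergence, and negative log-likelihood as the abstract describes; how the terms are paired across the reused feature branches, and the equal weighting, are my assumptions rather than the paper's specification.

import torch
import torch.nn.functional as F

def combined_loss(logits_a, logits_b, labels):
    # logits_a / logits_b: predictions from two feature-reuse branches
    ce = F.cross_entropy(logits_a, labels)
    nll = F.nll_loss(F.log_softmax(logits_b, dim=-1), labels)
    kl = F.kl_div(F.log_softmax(logits_a, dim=-1),          # consistency between branches
                  F.softmax(logits_b, dim=-1), reduction="batchmean")
    return ce + nll + kl

loss = combined_loss(torch.randn(8, 19), torch.randn(8, 19), torch.randint(0, 19, (8,)))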
15. Relational Turkish Text Classification Using Distant Supervised Entities and Relations
Authors: Halil Ibrahim Okur, Kadir Tohma, Ahmet Sertbas. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2209-2228 (20 pages)
Text classification, the automatic categorization of texts, is one of the foundational elements of natural language processing applications. This study investigates how text classification performance can be improved through the integration of entity-relation information obtained from Wikidata (the Wikipedia database) and BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in natural language processing (NLP), the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The adopted methodology encompasses text preprocessing, entity detection, and the integration of relational information. Experiments conducted on text datasets in both Turkish and English assess the performance of various classification algorithms, including Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that integrating entity-relation information can significantly enhance algorithm performance in text classification tasks and offers new perspectives for information extraction and semantic analysis in NLP applications. Contributions of this work include the utilization of distant supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating potential performance improvements through the integration of distant supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. It also contributes to the development of multilingual text classification systems by adding deeper meaning to text content, providing a valuable addition to current NLP studies and a reference point for future research.
Keywords: text classification, relation extraction, NER, distant supervision, deep learning, machine learning
16. Implicit Modality Mining: An End-to-End Method for Multimodal Information Extraction
Authors: Jinle Lu, Qinglang Guo. Journal of Electronic Research and Application, 2024, No. 2, pp. 124-139 (16 pages)
Multimodal named entity recognition (MNER) and multimodal relation extraction (MRE) are key to social media analysis but face challenges such as inefficient visual processing and suboptimal modality interaction. (1) Heavy visual embedding: visual embedding is both time- and compute-expensive because explicit visual cues must be extracted from the original image before it is fed into the multimodal model; consequently, these approaches cannot achieve efficient online reasoning. (2) Suboptimal interaction handling: the prevalent way of managing interaction between modalities relies on alternating self-attention and cross-attention mechanisms or depends excessively on gating. Such explicit modeling may fail to capture nuanced relations between image and text, ultimately undermining the model's ability to extract optimal information. To address these challenges, we introduce Implicit Modality Mining (IMM), a novel end-to-end framework for fine-grained image-text correlation without heavy visual embedders. IMM uses an Implicit Semantic Alignment module with a Transformer to obtain cross-modal clues and an Insert-Activation module to utilize these clues effectively. Our approach achieves state-of-the-art performance on three datasets.
Keywords: multimodal, named entity recognition, relation extraction, patch projection
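A minimal sketch of the patch projection named in the keywords, i.e. the generic ViT-style operation rather than the authors' full IMM pipeline: image patches are linearly projected into token embeddings, so no heavy pretrained visual encoder is required before the multimodal model.

import torch
import torch.nn as nn

class PatchProjection(nn.Module):
    def __init__(self, patch=16, channels=3, dim=768):
        super().__init__()
        # A conv with kernel = stride = patch size is equivalent to slicing patches
        # and applying one shared linear projection to each.
        self.proj = nn.Conv2d(channels, dim, kernel_size=patch, stride=patch)

    def forward(self, images):               # (batch, 3, H, W)
        x = self.proj(images)                # (batch, dim, H/patch, W/patch)
        return x.flatten(2).transpose(1, 2)  # (batch, num_patches, dim)

tokens = PatchProjection()(torch.randn(2, 3, 224, 224))  # (2, 196, 768)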
17. A survey on neural relation extraction (Cited: 12)
Authors: LIU Kang. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2020, No. 10, pp. 1971-1989 (19 pages)
Relation extraction is a key task for knowledge graph construction and natural language processing, aiming to extract meaningful relational information between entities from plain text. With the development of deep learning, many neural relation extraction models have been proposed recently. This paper presents a survey of neural relation extraction, covering the task description, widely used evaluation datasets, metrics, typical methods, challenges, and recent research progress. We mainly focus on four recent research problems: (1) how to learn semantic representations of the given sentences for the target relation, (2) how to train a neural relation extraction model with insufficient labeled instances, (3) how to extract relations across sentences or within a document, and (4) how to jointly extract relations and the corresponding entities. Finally, we give our conclusions and discuss future research issues.
Keywords: knowledge graph, relation extraction, event extraction, information extraction
18. Nested relation extraction with iterative neural network (Cited: 6)
Authors: Yixuan CAO, Dian CHEN, Zhengqi XU, Hongwei LI, Ping LUO. Frontiers of Computer Science (SCIE, EI, CSCD), 2021, No. 3, pp. 109-122 (14 pages)
Most existing research on relation extraction focuses on binary flat relations such as a BornIn relation between a Person and a Location. But a large portion of the objective facts described in natural language are complex, especially in professional documents in fields such as finance and biomedicine that require precise expressions. For example, "the GDP of the United States in 2018 grew 2.9% compared with 2017" describes a growth-rate relation between two other relations about the economic index, which is beyond the expressive power of binary flat relations. Thus, we propose the nested relation extraction problem and formulate it as a directed acyclic graph (DAG) structure extraction problem. We then propose a solution using an iterative neural network that extracts relations layer by layer. The proposed solution achieves F1 scores of 78.98 and 97.89 on two nested relation extraction tasks, namely semantic cause-and-effect relation extraction and formula extraction. Furthermore, we observe that nested relations are usually expressed in long sentences where entities are mentioned repetitively, which makes annotation difficult and error-prone. Hence, we extend our model with a mention-insensitive mode that only requires annotating relations on entity concepts (instead of exact mentions) while preserving most of its performance. Our mention-insensitive model performs better than the mention-sensitive model when the randomness in mention selection is higher than 0.3.
Keywords: nested relation extraction, mention-insensitive relation, iterative neural network
19. Event Temporal Relation Extraction with Attention Mechanism and Graph Neural Network (Cited: 2)
Authors: Xiaoliang Xu, Tong Gao, Yuxiang Wang, Xinle Xuan. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022, No. 1, pp. 79-90 (12 pages)
Event temporal relation extraction is an important part of natural language processing, and many models have been applied to the task with the development of deep learning. However, most existing methods cannot accurately estimate the degree of association between different tokens and events, and event-related information is not effectively integrated. In this paper, we propose an event information integration model that integrates event information through multilayer bidirectional long short-term memory (Bi-LSTM) and an attention mechanism. Although this scheme improves extraction performance, it can still be optimized further. To improve on it, we propose a novel relational graph attention network that incorporates edge attributes. In this approach, we first build a semantic dependency graph through dependency parsing, model a semantic graph that considers the edges' attributes by using top-k attention mechanisms to learn hidden semantic contextual representations, and finally predict event temporal relations. We evaluate the proposed models on the TimeBank-Dense dataset. Compared with previous baselines, the Micro-F1 scores obtained by our models improve by 3.9% and 14.5%, respectively.
Keywords: temporal relation extraction, neural network, attention mechanism, graph attention network
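A minimal sketch, under my own simplifications, of graph attention that folds in edge-attribute embeddings and keeps only the top-k neighbors per node, the two ingredients highlighted above; it is not the paper's exact layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKEdgeAttention(nn.Module):
    def __init__(self, dim, num_edge_types, k=3):
        super().__init__()
        self.edge_emb = nn.Embedding(num_edge_types, dim)
        self.score = nn.Linear(3 * dim, 1)
        self.k = k

    def forward(self, h, edge_types):
        # h: (n, dim) node states; edge_types: (n, n) integer edge labels; requires k <= n
        n, d = h.shape
        e = self.edge_emb(edge_types)                                    # (n, n, dim)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, d),
                          h.unsqueeze(0).expand(n, n, d), e], dim=-1)
        scores = self.score(pair).squeeze(-1)                            # (n, n)
        topk = scores.topk(self.k, dim=-1)
        mask = torch.full_like(scores, float("-inf")).scatter(-1, topk.indices, topk.values)
        alpha = F.softmax(mask, dim=-1)                                  # zero weight off the top-k
        return alpha @ h                                                 # (n, dim) updated nodes

h_new = TopKEdgeAttention(dim=32, num_edge_types=10, k=3)(
    torch.randn(6, 32), torch.randint(0, 10, (6, 6)))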
20. Adversarial Training for Supervised Relation Extraction (Cited: 2)
Authors: Yanhua Yu, Kanghao He, Jie Li. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2022, No. 3, pp. 610-618 (9 pages)
Most supervised methods for relation extraction (RE) involve time-consuming human annotation. Distant supervision for RE is an efficient way to obtain large corpora containing thousands of instances and various relations. However, existing approaches rely heavily on knowledge bases (e.g., Freebase), thereby introducing data noise; the variety of relations and the noisily labeled instances make the problem difficult to solve. In this study, we propose a model based on a piecewise convolution neural network with adversarial training. Inspired by generative adversarial networks, we adopt a heuristic algorithm to identify noisy data and apply adversarial training to RE. Experiments on the extended SemEval-2010 Task 8 dataset show that our model obtains more accurate training data for RE and significantly outperforms several competitive baseline models, reaching an F1 score of 89.61%.
Keywords: relation extraction, piecewise convolution neural network, adversarial training, generative adversarial network
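A minimal sketch of the common fast-gradient-style adversarial training recipe for text models, shown only as a generic illustration of adversarial training on word embeddings; the paper's GAN-inspired, noise-aware procedure is more involved and is not reproduced here. All names are placeholders, and the caller is assumed to zero gradients and step the optimizer around this function.

import torch

def adversarial_step(model, embedding, loss_fn, batch, labels, epsilon=1.0):
    loss = loss_fn(model(batch), labels)
    loss.backward()                                        # gradients w.r.t. the embedding matrix
    grad = embedding.weight.grad.detach()
    backup = embedding.weight.data.clone()
    norm = grad.norm()
    if norm > 0:
        embedding.weight.data.add_(epsilon * grad / norm)  # perturb toward higher loss
    adv_loss = loss_fn(model(batch), labels)               # loss on perturbed embeddings
    adv_loss.backward()                                    # accumulate adversarial gradients
    embedding.weight.data.copy_(backup)                    # restore clean embeddings
    return loss.item(), adv_loss.item()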