The joint entity relation extraction model which integrates the semantic information of relation is favored by researchers because of its effectiveness in resolving overlapping entities, and the method of manually defining relation semantic templates achieves particularly strong extraction performance because it can capture the deep semantic information of a relation. However, this method has some problems, such as reliance on expert experience and poor portability. Inspired by rule-based entity relation extraction methods, this paper proposes a joint entity relation extraction model based on automatically constructed relation semantic templates, abbreviated as RSTAC. The model refines the extraction rules of relation semantic templates from a relation corpus through dependency parsing and thereby realizes the automatic construction of relation semantic templates. Based on the relation semantic templates, the processes of relation classification and triplet extraction are constrained, and finally the entity relation triplets are obtained. Experimental results on three major Chinese datasets, DuIE, SanWen, and FinRE, show that the RSTAC model successfully captures rich deep relation semantics and improves the extraction of entity relation triples, with F1 scores increased by an average of 0.96% compared with classical joint extraction models such as CasRel, TPLinker, and RFBFN.
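As a rough illustration of the template-refinement idea (not the authors' actual RSTAC rules), the sketch below extracts the shortest dependency path between two entity tokens from an already-parsed sentence and treats it as a candidate relation template; the toy parse and the `shortest_dep_path` helper are hypothetical.

```python
from collections import deque

def shortest_dep_path(edges, start, end):
    """Find the shortest dependency path between two token indices.

    edges: list of (head_index, dep_label, dependent_index) triples
    returns the path as a list of dependency labels, or None if unreachable.
    """
    graph = {}
    for head, label, dep in edges:
        graph.setdefault(head, []).append((dep, label))
        graph.setdefault(dep, []).append((head, label))

    queue = deque([(start, [])])
    visited = {start}
    while queue:
        node, path = queue.popleft()
        if node == end:
            return path
        for nxt, label in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [label]))
    return None

# Toy dependency parse of "AcmeCorp acquired BetaInc" (token indices 0, 1, 2).
edges = [(1, "nsubj", 0), (1, "dobj", 2)]
print(shortest_dep_path(edges, 0, 2))   # ['nsubj', 'dobj'] -> candidate template
```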
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module. This method significantly enhances the semantic interaction capabilities between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding those of state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
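The voting idea can be pictured as a small routing function: when the fine-tuned SLM is confident, its label is kept; otherwise several LLM answers are polled and the majority wins. This is a sketch under assumed interfaces; `slm_predict`, `llm_predict`, and the 0.7 threshold are placeholders, not the paper's actual components.

```python
from collections import Counter

def route_prediction(sample, slm_predict, llm_predict, threshold=0.7, n_votes=3):
    """Keep the SLM label for easy samples; re-vote hard ones with the LLM.

    slm_predict(sample) -> (label, confidence)
    llm_predict(sample) -> label  (may vary across calls, e.g. with sampling)
    """
    label, confidence = slm_predict(sample)
    if confidence >= threshold:
        return label                                   # confident SLM prediction
    votes = [label] + [llm_predict(sample) for _ in range(n_votes)]
    return Counter(votes).most_common(1)[0][0]          # majority vote on hard samples

# Hypothetical usage with stub models.
slm = lambda s: ("acquired-by", 0.55)                   # low-confidence SLM output
llm = lambda s: "subsidiary-of"
print(route_prediction("Company A ... Company B", slm, llm))
```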
Deep neural network-based relation extraction research has made significant progress in recent years, and it provides data support for many natural language processing downstream tasks such as building knowledge graphs, sentiment analysis, and question-answering systems. However, previous studies ignored much unused structural information in sentences that could enhance the performance of the relation extraction task. Moreover, most existing dependency-based models utilize self-attention to distinguish the importance of context, which hardly deals with multiple-structure information. To efficiently leverage multiple kinds of structure information, this paper proposes a dynamic structure attention mechanism model based on textual structure information, which deeply integrates word embeddings, named entity recognition labels, part of speech, the dependency tree, and dependency types into a graph convolutional network. Specifically, our model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features in various structural information. In addition, multi-structure weights are carefully designed as a merging mechanism in the different structure attentions to dynamically adjust the final attention. This paper combines these features and trains a graph convolutional network for relation extraction. We experiment on supervised relation extraction datasets including SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED, and the results significantly outperform previous work.
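A minimal sketch of the multi-structure idea, assuming PyTorch: each structural view (e.g. dependency tree, dependency-type graph, POS-compatibility graph) contributes its own graph convolution, and a learned per-token attention over the views merges them. The module name, dimensions, and single-layer design are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiStructureGCNLayer(nn.Module):
    """One GCN layer that fuses several structural views of a sentence.

    Each view supplies its own adjacency matrix; a learned attention over
    the views decides how much each structure contributes per token."""
    def __init__(self, hidden_dim, num_structures):
        super().__init__()
        self.gcn_weights = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_structures))
        self.structure_scorer = nn.Linear(hidden_dim, num_structures)

    def forward(self, h, adjacencies):
        # h: (batch, seq_len, hidden); adjacencies: list of (batch, seq_len, seq_len)
        views = []
        for adj, linear in zip(adjacencies, self.gcn_weights):
            degree = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
            views.append(torch.relu(linear(torch.bmm(adj, h) / degree)))
        views = torch.stack(views, dim=2)                       # (B, L, S, H)
        attn = torch.softmax(self.structure_scorer(h), dim=-1)  # (B, L, S)
        return (attn.unsqueeze(-1) * views).sum(dim=2)          # merged (B, L, H)

# Toy check with random tensors.
layer = MultiStructureGCNLayer(hidden_dim=16, num_structures=3)
h = torch.randn(2, 5, 16)
adjs = [torch.randint(0, 2, (2, 5, 5)).float() for _ in range(3)]
print(layer(h, adjs).shape)   # torch.Size([2, 5, 16])
```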
The relation is a semantic expression relevant to two named entities in a sentence. Since a sentence usually contains several named entities, it is essential to learn a structured sentence representation that encodes dependency information specific to the two named entities. In related work, graph convolutional neural networks are widely adopted to learn semantic dependencies, where a dependency tree initializes the adjacency matrix. However, this approach has two main issues. First, parsing a sentence heavily relies on external toolkits, which can be error-prone. Second, the dependency tree only encodes the syntactical structure of a sentence, which may not align with the relational semantic expression. In this paper, we propose an automatic graph learning method to autonomously learn a sentence's structural information. Instead of using a fixed adjacency matrix initialized by a dependency tree, we introduce an Adaptive Adjacency Matrix to encode the semantic dependency between tokens. The elements of this matrix are dynamically learned during the training process and optimized by task-relevant learning objectives, enabling the construction of task-relevant semantic dependencies within a sentence. Our model demonstrates superior performance on the TACRED and SemEval 2010 datasets, surpassing previous works by 1.3% and 0.8%, respectively. These experimental results show that our model excels in the relation extraction task, outperforming prior models.
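A small sketch of a learned adjacency matrix, assuming PyTorch: instead of a parser-provided tree, the adjacency is produced from the token representations themselves (here by a scaled dot-product score) and is optimized end to end by the task loss. The layer name and scoring function are illustrative stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class AdaptiveAdjacencyGCN(nn.Module):
    """GCN layer whose adjacency matrix is learned rather than parser-provided."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.gcn = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h):
        # h: (batch, seq_len, hidden)
        scores = torch.bmm(self.query(h), self.key(h).transpose(1, 2))
        adjacency = torch.softmax(scores / h.size(-1) ** 0.5, dim=-1)  # learned A
        return torch.relu(self.gcn(torch.bmm(adjacency, h))), adjacency

layer = AdaptiveAdjacencyGCN(hidden_dim=32)
tokens = torch.randn(4, 10, 32)
out, learned_adj = layer(tokens)
print(out.shape, learned_adj.shape)   # torch.Size([4, 10, 32]) torch.Size([4, 10, 10])
```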
An exhaustive study has been conducted to investigate span-based models for the joint entity and relation extraction task. However, these models sample a large number of negative entities and negative relations during model training, which are essential but result in grossly imbalanced data distributions and in turn cause suboptimal model performance. In order to address the above issues, we propose a two-phase paradigm for span-based joint entity and relation extraction, which involves classifying the entities and relations in the first phase and predicting the types of these entities and relations in the second phase. The two-phase paradigm enables our model to significantly reduce the data distribution gap, including the gap between negative entities and other entities, as well as the gap between negative relations and other relations. In addition, we make the first attempt at combining entity type and entity distance as global features, which has proven effective, especially for relation extraction. Experimental results on several datasets demonstrate that the span-based joint extraction model augmented with the two-phase paradigm and the global features consistently outperforms previous state-of-the-art span-based models for the joint extraction task, establishing a new standard benchmark. Qualitative and quantitative analyses further validate the effectiveness of the proposed paradigm and the global features.
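The control flow of the two-phase idea can be sketched in a few lines: a binary filter first separates negative spans from real entities so the heavily imbalanced negatives never reach the typer, and only surviving spans are typed. The span enumeration and the two classifiers below are placeholders, not the authors' model.

```python
def two_phase_entity_extraction(spans, is_entity, entity_type):
    """Illustrative two-phase span classification.

    is_entity(span)   -> bool   (phase-1 binary filter, placeholder)
    entity_type(span) -> str    (phase-2 type classifier, placeholder)
    """
    candidates = [s for s in spans if is_entity(s)]        # phase 1: filter negatives
    return [(s, entity_type(s)) for s in candidates]       # phase 2: assign types

# Hypothetical usage with stub classifiers.
spans = ["Barack Obama", "was born", "Honolulu", "in 1961"]
is_entity = lambda s: s in {"Barack Obama", "Honolulu"}
entity_type = lambda s: "PER" if s == "Barack Obama" else "LOC"
print(two_phase_entity_extraction(spans, is_entity, entity_type))
# [('Barack Obama', 'PER'), ('Honolulu', 'LOC')]
```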
Cross-document relation extraction (RE), as an extension of information extraction, requires integrating information from multiple documents retrieved from open domains that contain a large number of irrelevant or confusing noisy texts. Previous studies focus on the attention mechanism to construct the connection between different text features through semantic similarity. However, similarity-based methods cannot distinguish valid information from highly similar retrieved documents well. How to design an effective algorithm that implements aggregated reasoning over confusing information with similar features still remains an open issue. To address this problem, we design a novel local-to-global causal reasoning (LGCR) network for cross-document RE, which enables efficient distinguishing, filtering, and global reasoning on complex information from a causal perspective. Specifically, we propose a local causal estimation algorithm to estimate the causal effect, which is the first attempt to use causal reasoning, independent of feature similarity, to distinguish between confusing and valid information in cross-document RE. Furthermore, based on the causal effect, we propose a causality-guided global reasoning algorithm to filter the confusing information and achieve global reasoning. Experimental results under the closed and open settings of the large-scale dataset CodRED demonstrate that our LGCR network significantly outperforms state-of-the-art methods and validate the effectiveness of causal reasoning in processing confusing information.
A qualia role-based entity-dependency graph (EDG) is proposed to represent and extract quantity relations for solving algebra story problems stated in Chinese. Traditional neural solvers use end-to-end models to translate problem texts into math expressions, which lack quantity relation acquisition in sophisticated scenarios. To address the problem, the proposed method leverages the EDG to represent quantity relations hidden in the qualia roles of math objects. Algorithms were designed for EDG generation and quantity relation extraction for solving algebra story problems. Experimental results show that the proposed method achieved an average accuracy of 82.2% on quantity relation extraction, compared to 74.5% for the baseline method. A further prompt-learning experiment shows a 5% increase in problem solving obtained by injecting the extracted quantity relations into the baseline neural solvers.
Text classification, by automatically categorizing texts, is one of the foundational elements of natural language processing applications. This study investigates how text classification performance can be improved through the integration of entity-relation information obtained from the Wikidata (Wikipedia) database and BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in the field of natural language processing (NLP), the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The adopted methodology encompasses a comprehensive approach that includes text preprocessing, entity detection, and the integration of relational information. Experiments conducted on text datasets in both Turkish and English assess the performance of various classification algorithms, such as Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that the integration of entity-relation information can significantly enhance algorithm performance in text classification tasks and offer new perspectives for information extraction and semantic analysis in NLP applications. Contributions of this work include the utilization of distantly supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating potential performance improvements through the integration of distantly supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. Additionally, it makes significant contributions to the development of multilingual text classification systems by adding deeper meaning to text content, thereby providing a valuable addition to current NLP studies and setting an important reference point for future research.
Background and objective: In northern China's cold regions, the prevalence of metabolic dysfunction-associated steatotic liver disease (MASLD) exceeds 50%, significantly higher than the national and global rates. MASLD is an important risk factor for cardiovascular and cerebrovascular diseases, including coronary heart disease, stroke, and tumors, with no specific therapeutic drugs currently available. The ethanol extract of cassia seed (CSEE) has shown promise in lowering blood lipids and improving hepatic steatosis, but its mechanism in treating MASLD remains underexplored. This study aims to investigate the therapeutic effects and mechanisms of CSEE. Methods: MASLD models were established in male Wistar rats and golden hamsters using a high-fat diet (HFD). CSEE (10, 50, 250 mg/kg) was administered via gavage for six weeks. Serum levels of total cholesterol (TC), triglyceride (TG), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), aspartate aminotransferase (AST), and alanine aminotransferase (ALT), as well as liver TC and TG, were measured using biochemical kits. Histopathological changes in the liver were evaluated using Oil Red O staining, hematoxylin-eosin (H&E) staining, and transmission electron microscopy (TEM). HepG2 cell viability was assessed using the cell counting kit-8 (CCK8) and Calcein-AM/PI staining. Network pharmacology was used to analyze drug-disease targets, and western blotting was used to confirm these predictions. Results: CSEE treatment significantly reduced serum levels of TC, TG, LDL-C, ALT, and AST, and improved liver weight, liver index, and hepatic lipid deposition in rats and golden hamsters. In addition, CSEE alleviated free fatty acid (FFA)-induced lipid deposition in HepG2 cells. Molecular biology experiments demonstrated that CSEE increased the protein levels of p-AMPK, p-ACC, PPARα, CPT1A, PI3K p110, and p-AKT, while decreasing the protein levels of SREBP1, FASN, C/EBPα, and PPARγ, thus improving hepatic lipid metabolism and reducing lipid deposition. The beneficial effects of CSEE were reversed by small-molecule inhibitors of the signaling pathways in vitro. Conclusion: CSEE improves liver lipid metabolism and reduces lipid droplet deposition in Wistar rats and golden hamsters with MASLD by activating the hepatic AMPK, PPARα, and PI3K/AKT signaling pathways.
Multimodal named entity recognition (MNER) and relation extraction (MRE) are key in social media analysis but face challenges like inefficient visual processing and non-optimal modality interaction. (1) Heavy visual embedding: the process of visual embedding is both time- and computationally expensive due to the prerequisite extraction of explicit visual cues from the original image before input into the multimodal model. Consequently, these approaches cannot achieve efficient online reasoning. (2) Suboptimal interaction handling: the prevalent method of managing interaction between different modalities typically relies on the alternation of self-attention and cross-attention mechanisms or excessive dependence on the gating mechanism. This explicit modeling method may fail to capture some nuanced relations between image and text, ultimately undermining the model's capability to extract optimal information. To address these challenges, we introduce Implicit Modality Mining (IMM), a novel end-to-end framework for fine-grained image-text correlation without heavy visual embedders. IMM uses an Implicit Semantic Alignment module with a Transformer for cross-modal clues and an Insert-Activation module to effectively utilize these clues. Our approach achieves state-of-the-art performance on three datasets.
Recently, many researchers have concentrated on using neural networks to learn features for Distant Supervised Relation Extraction (DSRE). These approaches generally use a softmax classifier with cross-entropy loss, which inevitably brings the noise of the artificial class NA into the classification process. To address this shortcoming, a classifier with ranking loss is employed for DSRE. Uniformly randomly selecting a relation or heuristically selecting the highest score among all incorrect relations are two common methods for generating a negative class in the ranking loss function. However, the majority of the generated negative classes can be easily discriminated from the positive class and will contribute little towards the training. Inspired by Generative Adversarial Networks (GANs), we use a neural network as the negative class generator to assist the training of our desired model, which acts as the discriminator in GANs. Through the alternating optimization of generator and discriminator, the generator learns to produce more and more discriminable negative classes and the discriminator has to become better as well. This framework is independent of the concrete form of generator and discriminator. In this paper, we use a two-layer fully-connected neural network as the generator and Piecewise Convolutional Neural Networks (PCNNs) as the discriminator. Experimental results show that our proposed GAN-based method is effective and performs better than state-of-the-art methods.
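A minimal sketch of the alternating optimization, assuming PyTorch: a two-layer fully-connected generator proposes a "hard" negative relation distribution, a simple MLP stands in for the PCNN discriminator, a margin ranking loss pushes the gold relation above the generated negative, and the generator is then updated to maximize the score of the negatives it produces. Shapes, learning rates, and the MLP discriminator are illustrative simplifications.

```python
import torch
import torch.nn as nn

hidden, num_rel = 64, 53

# MLP stand-in for the PCNN discriminator used in the paper.
discriminator = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                              nn.Linear(hidden, num_rel))
# Two-layer fully-connected generator producing a negative relation distribution.
generator = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, num_rel), nn.Softmax(dim=-1))

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
margin = 1.0

def training_step(sent_repr, gold_relation):
    # sent_repr: (batch, hidden); gold_relation: (batch,) long tensor of labels
    scores = discriminator(sent_repr)                               # (batch, num_rel)
    pos_score = scores.gather(1, gold_relation.unsqueeze(1)).squeeze(1)

    # Discriminator update: rank the gold relation above the generated negative.
    neg_dist = generator(sent_repr).detach()
    neg_score = (scores * neg_dist).sum(dim=1)                      # expected negative score
    d_loss = torch.relu(margin - pos_score + neg_score).mean()
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce negatives the discriminator scores highly.
    neg_dist = generator(sent_repr)
    neg_score = (discriminator(sent_repr).detach() * neg_dist).sum(dim=1)
    g_loss = (-neg_score).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

print(training_step(torch.randn(8, hidden), torch.randint(0, num_rel, (8,))))
```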
The joint extraction of entities and their relations from certain texts plays a significant role in most natural language processing tasks. For entity and relation extraction in a specific domain, we propose a hybrid neural framework consisting of two parts: a span-based model and a graph-based model. The span-based model can tackle overlapping problems compared with BILOU methods, whereas the graph-based model treats relation prediction as graph classification. Our main contribution is to incorporate external lexical and syntactic knowledge of a specific domain, such as domain dictionaries and dependency structures from texts, into end-to-end neural models. We conducted extensive experiments on a Chinese military entity and relation extraction corpus. The results show that the proposed framework outperforms the baselines in terms of entity and relation prediction. The proposed method provides insight into problems with the joint extraction of entities and their relations.
Log-linear models and, more recently, neural network models used for supervised relation extraction require substantial amounts of training data and time, limiting their portability to new relations and domains. To this end, we propose a training representation based on the dependency paths between entities in a dependency tree, which we call lexicalized dependency paths (LDPs). We show that this representation is fast, efficient, and transparent. We further propose representations utilizing entity types and their subtypes to refine our model and alleviate the data sparsity problem. We apply lexicalized dependency paths to supervised learning using the ACE corpus and show that it can achieve a similar performance level to other state-of-the-art methods and even surpass them on several categories.
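A toy illustration of building a lexicalized dependency path between two entities from a parent-pointer dependency tree: climb from the first entity to the lowest common ancestor, record the lexical anchor there, then descend to the second entity. The parse, label set, and `^`/`v` direction markers are hypothetical and not the ACE feature set used in the paper.

```python
def lexicalized_dependency_path(tokens, heads, labels, e1, e2):
    """Build a lexicalized dependency path (LDP) between two token indices.

    tokens[i]: surface word; heads[i]: head index of token i (-1 for root);
    labels[i]: label of the arc heads[i] -> i.
    """
    def chain_to_root(i):
        chain = [i]
        while heads[chain[-1]] != -1:
            chain.append(heads[chain[-1]])
        return chain

    up1, up2 = chain_to_root(e1), chain_to_root(e2)
    lca = next(n for n in up1 if n in up2)              # lowest common ancestor

    path = ["E1"]
    for i in up1[:up1.index(lca)]:                      # climb from e1 to the LCA
        path.append(f"^{labels[i]}")
    path.append(tokens[lca])                            # lexical anchor at the LCA
    for i in reversed(up2[:up2.index(lca)]):            # descend from the LCA to e2
        path.append(f"v{labels[i]}")
    path.append("E2")
    return " ".join(path)

# Toy parse of "Smith , chairman of Acme , resigned" (illustrative heads/labels).
tokens = ["Smith", ",", "chairman", "of", "Acme", ",", "resigned"]
heads  = [6, 0, 0, 2, 3, 0, -1]
labels = ["nsubj", "punct", "appos", "prep", "pobj", "punct", "root"]
print(lexicalized_dependency_path(tokens, heads, labels, 0, 4))
# E1 Smith vappos vprep vpobj E2
```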
Relation extraction is an important task in the NLP community. However, some models often fail to capture long-distance dependencies in semantics, and the interaction between the semantics of two entities is ignored. In this paper, we propose a novel neural network model for semantic relation classification called joint self-attention bi-LSTM (SA-Bi-LSTM), which models the internal structure of the sentence to obtain the importance of each word without relying on additional information, and captures long-distance semantic dependencies. We conduct experiments using the SemEval-2010 Task 8 dataset. Extensive experiments and the results demonstrate that the proposed method is effective for relation classification and can obtain state-of-the-art classification accuracy with only minimal feature engineering.
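A compact sketch of a bi-LSTM encoder with self-attention pooling for relation classification, assuming PyTorch; the single-head attention, dimensions, and hyperparameters are illustrative, not the paper's exact SA-Bi-LSTM configuration.

```python
import torch
import torch.nn as nn

class SelfAttentionBiLSTM(nn.Module):
    """Bi-LSTM encoder with self-attention pooling for relation classification."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)      # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))     # (B, L, 2H)
        weights = torch.softmax(self.attn(h), dim=1)  # importance of each word
        sentence = (weights * h).sum(dim=1)           # attention-weighted pooling
        return self.classifier(sentence)

model = SelfAttentionBiLSTM(vocab_size=5000, embed_dim=100, hidden_dim=128,
                            num_classes=19)           # SemEval-2010 Task 8 has 19 labels
logits = model(torch.randint(0, 5000, (4, 20)))
print(logits.shape)                                   # torch.Size([4, 19])
```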
The traditional deep learning model has the problems that long-distance dependency information cannot be learned and that the correlation between the input and output of the model is not considered, and its processing of sentence sets is still insufficient. Aiming at the above problems, a relation extraction method combining a bidirectional GRU network and a multi-attention mechanism is proposed. The word-level attention mechanism is used to extract word-level features from a sentence, and the sentence-level attention mechanism is used to focus on the characteristics of sentence sets. Experimental verification was conducted on the NYT dataset. The experimental results show that the proposed method can effectively improve the F1 score of relation extraction.
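The two attention levels can be pictured with a small PyTorch module, offered here as a sketch rather than the paper's model: word-level attention pools the bi-GRU states of each sentence, and sentence-level attention pools the resulting sentence vectors over the bag of sentences that mention one entity pair. Dimensions and the 53-class label set are illustrative.

```python
import torch
import torch.nn as nn

class BiGRUMultiAttention(nn.Module):
    """Bi-GRU with word-level attention (within a sentence) and sentence-level
    attention (over a bag of sentences for one entity pair)."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.word_attn = nn.Linear(2 * hidden_dim, 1)
        self.sent_attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, bag):
        # bag: (num_sentences, seq_len) token ids for one entity pair
        h, _ = self.bigru(self.embed(bag))                   # (S, L, 2H)
        w = torch.softmax(self.word_attn(h), dim=1)          # word-level attention
        sentences = (w * h).sum(dim=1)                       # (S, 2H)
        a = torch.softmax(self.sent_attn(sentences), dim=0)  # sentence-level attention
        bag_repr = (a * sentences).sum(dim=0)                # (2H,)
        return self.classifier(bag_repr)

model = BiGRUMultiAttention(vocab_size=8000, embed_dim=50, hidden_dim=100,
                            num_classes=53)                  # NYT-style label set
print(model(torch.randint(0, 8000, (6, 30))).shape)          # torch.Size([53])
```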
Distant supervision has the ability to generate a huge amount of training data. Recently, multi-instance multi-label learning has been imported into distant supervision to combat noisy data and improve the performance of relation extraction. But multi-instance multi-label learning only uses hidden variables when inferring the relation between entities, which does not make full use of the training data. Besides, traditional lexical and syntactic features are defective in reflecting domain knowledge and the global information of a sentence, which limits the system's performance. This paper presents a novel approach for multi-instance multi-label learning, which takes the idea of fuzzy classification. We use cluster centers as training data, and in this way we can adequately utilize sentence-level features. Meanwhile, we extend the feature set with paragraph vectors, which carry semantic information of sentences. We conduct an extensive empirical study to verify our contributions. The results show our method is superior to the state-of-the-art distantly supervised baseline.
Relative radiometric normalization (RRN) minimizes radiometric differences among images caused by inconsistencies in acquisition conditions rather than changes in the surface. The scale-invariant feature transform (SIFT) has the ability to automatically extract control points (CPs) and is commonly used for remote sensing images. However, its results are mostly inaccurate and sometimes contain incorrect matching caused by generating a small number of false CP pairs, which have a high false-alarm matching rate. This paper presents a modified method to improve the performance of SIFT CP matching by applying the sum of absolute differences (SAD) in a different manner for the new optical satellite generation called near-equatorial orbit satellites and for multi-sensor images. The proposed method, which has a significantly high rate of correct matches, improves CP matching. The data in this study were obtained from the RazakSAT satellite, a new near-equatorial satellite system. The proposed method involves six steps: 1) data reduction, 2) applying SIFT to automatically extract CPs, 3) refining CP matching by using the SAD algorithm with an empirical threshold, 4) calculating the true CP intensity values over all image bands, 5) fitting a linear regression model between the intensity values of CPs located in the reference and sensed image bands, and 6) conducting relative radiometric normalization using the regression transformation functions. Different thresholds (50 and 70) were experimentally tested and used in this study; by following the proposed method, the extracted SIFT CP pairs were reduced from 775, 1125, 883, 804, 883, and 681 pairs containing false matches to 342, 424, 547, 706, 547, and 469 correctly matched pairs, respectively.
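A rough sketch of the pipeline using OpenCV and NumPy, under simplifying assumptions (single-band uint8 images, a 5x5 SAD window, Lowe's ratio test, and an illustrative SAD threshold); the function name and parameter values are placeholders, not the paper's exact procedure.

```python
import cv2
import numpy as np

def refine_cps_and_normalize(reference, sensed, sad_threshold=50, window=5):
    """SIFT CP matching refined by a sum-of-absolute-differences (SAD) check,
    followed by per-band linear regression for relative radiometric normalization."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(reference, None)
    kp2, des2 = sift.detectAndCompute(sensed, None)

    matcher = cv2.BFMatcher()
    matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
               if m.distance < 0.75 * n.distance]           # Lowe's ratio test

    half = window // 2
    def patch(img, pt):
        x, y = int(round(pt[0])), int(round(pt[1]))
        return img[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)

    ref_vals, sen_vals = [], []
    for m in matches:
        p1 = patch(reference, kp1[m.queryIdx].pt)
        p2 = patch(sensed, kp2[m.trainIdx].pt)
        if p1.shape != (window, window) or p2.shape != (window, window):
            continue                                         # skip border keypoints
        if np.abs(p1 - p2).sum() / window ** 2 < sad_threshold:   # SAD refinement
            ref_vals.append(p1[half, half])
            sen_vals.append(p2[half, half])

    gain, offset = np.polyfit(sen_vals, ref_vals, 1)         # linear regression RRN
    normalized = np.clip(gain * sensed.astype(np.float32) + offset, 0, 255)
    return normalized.astype(np.uint8), len(matches), len(ref_vals)
```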
A new approach to relation extraction is described in this paper. It adopts a bootstrapping model with a novel iteration strategy, which generates more precise examples of a specific relation. Compared with previous methods, the proposed method has three main advantages: first, it needs less manual intervention; second, more abundant and reasonable information is introduced to represent a relation pattern; third, it reduces the risk of circular dependency occurring in bootstrapping. A scalable evaluation methodology and metrics are developed for our task, with comparable techniques, over the TianWang 100G corpus. The experimental results show that it can reach 90% precision and has excellent expansibility.
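A toy bootstrapping loop, not the paper's algorithm, that illustrates the basic cycle the iteration strategy refines: seed pairs induce patterns, patterns are scored by how selectively they match known pairs, and confident patterns extract new pairs for the next round. The corpus representation, scoring, and threshold are all illustrative.

```python
def bootstrap_relation_extraction(corpus, seed_pairs, iterations=5, min_confidence=0.5):
    """Toy bootstrapping loop for a single relation type.

    corpus: list of (entity1, middle_context, entity2) occurrences;
    seed_pairs: initial set of known (entity1, entity2) instances.
    Patterns here are simply the middle contexts; real systems use richer
    pattern representations.
    """
    known_pairs = set(seed_pairs)
    scored = {}
    for _ in range(iterations):
        # 1. Induce patterns from occurrences of currently known pairs.
        patterns = {}
        for e1, ctx, e2 in corpus:
            if (e1, e2) in known_pairs:
                patterns[ctx] = patterns.get(ctx, 0) + 1
        # 2. Score each pattern by how selectively it matches known pairs.
        scored = {ctx: hits / sum(1 for _, c, _ in corpus if c == ctx)
                  for ctx, hits in patterns.items()}
        # 3. Extract new pairs with confident patterns, then iterate.
        for e1, ctx, e2 in corpus:
            if scored.get(ctx, 0.0) >= min_confidence:
                known_pairs.add((e1, e2))
    return known_pairs, scored

corpus = [("Paris", "is the capital of", "France"),
          ("Berlin", "is the capital of", "Germany"),
          ("Berlin", "is a city in", "Germany"),
          ("Munich", "is a city in", "Germany")]
pairs, patterns = bootstrap_relation_extraction(corpus, {("Paris", "France")})
print(pairs)   # seeds plus ('Berlin', 'Germany') and, via drift, ('Munich', 'Germany')
```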
Currently, large amounts of information exist on Web sites and in various digital media. Most of this information is in natural language; it is easy to browse but difficult for computers to understand. Chunk parsing and entity relation extraction are important for understanding the semantics of information in natural language processing. Chunk analysis is a shallow parsing method, and entity relation extraction is used to establish relationships between entities. Because full syntactic parsing of Chinese text is complex, many researchers are more interested in chunk analysis and relation extraction. The conditional random fields (CRFs) model is a valid probabilistic model for segmenting and labeling sequence data. This paper models the chunking and entity relation problems in Chinese text; by transforming them into sequence labeling problems, CRFs can be used to realize chunk analysis and entity relation extraction.
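A minimal sequence-labeling sketch with a generic CRF toolkit (sklearn-crfsuite), shown only to illustrate the labeling formulation; the feature set, the tiny English toy corpus, and the hyperparameters are placeholders and not the paper's Chinese chunking and relation model.

```python
import sklearn_crfsuite   # pip install sklearn-crfsuite

def token_features(sentence, i):
    """Simple per-token features for a CRF labeler (illustrative feature set)."""
    word = sentence[i]
    return {
        "word": word,
        "is_digit": word.isdigit(),
        "prev_word": sentence[i - 1] if i > 0 else "<BOS>",
        "next_word": sentence[i + 1] if i < len(sentence) - 1 else "<EOS>",
    }

# Tiny toy corpus labelled with BIO chunk tags.
sentences = [["John", "lives", "in", "New", "York"],
             ["Mary", "visited", "Paris"]]
labels = [["B-PER", "O", "O", "B-LOC", "I-LOC"],
          ["B-PER", "O", "B-LOC"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)

test = ["Bob", "visited", "London"]
print(crf.predict([[token_features(test, i) for i in range(len(test))]]))
```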