The phenomena of similar objects having different spectra and of different objects having similar spectra often make it difficult to separate and identify all types of geographical objects using spectral information alone. There is therefore a need to incorporate the spatial structural and spatial association properties of object surfaces into image processing to improve the classification accuracy of remotely sensed imagery. In this article, a new method based on the principle of multiple-point statistics is proposed for combining spectral and spatial information in image classification. The method was validated by applying it to a case study on road extraction from Landsat TM imagery taken over the Chinese Yellow River delta on August 8, 1999. The classification results show that the new method provides overall better results than traditional methods such as the maximum likelihood classifier (MLC).
Due to the need for rapid and sustainable development of China's coastal zones, high-resolution information extraction using data mining technology has become an urgent research focus. However, traditional pixel-based image analysis methods cannot meet the needs of this development trend. This paper presents an information extraction approach based on image segmentation with an object-oriented algorithm for high-resolution remote sensing images. An aim of this research is to establish a "pixel-primitive-object" identification system. Through the extraction and combination of micro-scale coastal zone features, objects such as tidal flats, water lines, sea walls, and mariculture ponds are classified or recognized. First, various internal features of relatively homogeneous primitive objects are extracted using an image segmentation algorithm based on both spectral and shape information. Second, the features of those primitives are analyzed to ascertain an optimal object by applying certain feature rules. The results indicate that the model is practical to implement and that the extraction accuracy of coastal information is significantly improved compared with traditional approaches. This study therefore provides a potential way to serve the monitoring, management, development, and utilization of China's highly dynamic coastal zones.
Analyzing Research and Development (R&D) trends is important because it can influence future decisions regarding R&D direction. In typical trend analysis, topic or technology taxonomies are employed to compute the popularity of topics or codes over time. Although this is simple and effective, the taxonomies are difficult to maintain because new technologies are introduced rapidly. Therefore, recent studies exploit deep learning to extract pre-defined targets such as problems and solutions. Building on recent advances in question answering (QA) using deep learning, we adopt a multi-turn QA model to extract problems and solutions from Korean R&D reports. Unlike previous research on QA-style Information Extraction (IE), which targets sentence-level benchmark datasets, we use the reports directly and analyze the difficulties in handling them. After investigating the characteristics of Korean R&D reports, we propose a model that handles multiple and repeated appearances of targets in the reports, including an algorithm with two novel modules and a prompt. The proposed methodology focuses on reformulating a question without a static template or pre-defined knowledge. We show the effectiveness of the proposed model on a Korean R&D report dataset that we constructed, and present an in-depth analysis of the benefits of the multi-turn QA model.
Because of the developed economy and lush vegetation in southern China, the following obstacles exist in remote sensing land surface classification: 1) diverse surface composition types; 2) undulating terrain; 3) small, fragmented parcels of land; 4) indistinguishable shadows of surface objects. A key priority is to clarify how big data concepts (data mining technology) and various new technologies and methods can move complex-surface remote sensing information extraction toward automation, refinement, and intelligence. To achieve these objectives, this paper takes Gaofen-2 satellite data produced in China as the data source and complex-surface remote sensing information extraction as the research object, and intelligently analyzes remote sensing information of complex surfaces after completing data collection and preprocessing. The specific extraction methods are as follows: 1) extraction of fractal texture features based on Brownian motion; 2) extraction of color features; 3) extraction of vegetation indices; 4) construction of feature vectors and the corresponding classification. Fractal texture features, color features, vegetation features, and spectral features of remote sensing images are combined into a single feature vector, which increases the feature dimensionality, enhances the separability of ground features, and thus improves the classification accuracy of remote sensing images. The approach is suitable for remote sensing information extraction of complex surfaces in southern China and can be extended to other complex surface areas in the future.
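A minimal sketch of the combined-feature idea described above: per-pixel feature maps (texture, color, vegetation index, spectral bands) are stacked into one feature vector per pixel for classification. The band values, the `ndvi` and `combine_features` helpers, and the toy texture map are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized Difference Vegetation Index per pixel.
    return (nir - red) / (nir + red + eps)

def combine_features(*feature_maps):
    # Stack per-pixel feature maps (H x W arrays) into an (H*W, k) matrix,
    # one row per pixel, one column per feature.
    flat = [f.ravel() for f in feature_maps]
    return np.stack(flat, axis=1)

# Toy 2x2 band values (illustrative, not real Gaofen-2 data).
red = np.array([[0.2, 0.4], [0.1, 0.3]])
nir = np.array([[0.6, 0.5], [0.7, 0.4]])
texture = np.array([[0.1, 0.9], [0.2, 0.8]])  # stand-in for a fractal texture feature

X = combine_features(ndvi(nir, red), texture, red, nir)
# X has shape (4, 4): 4 pixels, 4 features per pixel.
```

The resulting matrix `X` would then be fed to any pixel-wise classifier; the point is only that heterogeneous features share one vector space.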
Named entity recognition (NER) is a fundamental task of information extraction (IE), and it has attracted considerable research attention in recent years. Abundant annotated English NER datasets have significantly promoted NER research in the English field. By contrast, much less effort has been devoted to Chinese NER research, especially in the scientific domain, due to the scarcity of Chinese NER datasets. To alleviate this problem, we present a Chinese scientific NER dataset, SciCN, which contains entity annotations of titles and abstracts derived from 3,500 scientific papers. We manually annotated a total of 62,059 entities, classified into six types. Compared to English scientific NER datasets, SciCN is larger and more diverse, since it not only contains more paper abstracts but also draws them from more research fields. To investigate the properties of SciCN and provide baselines for future research, we adapted a number of previous state-of-the-art Chinese NER models to evaluate SciCN. Experimental results show that SciCN is more challenging than other Chinese NER datasets. In addition, previous studies have proven the effectiveness of using lexicons to enhance Chinese NER models. Motivated by this fact, we provide a scientific domain-specific lexicon. Validation results demonstrate that our lexicon delivers better performance gains than lexicons of other domains. We hope that the SciCN dataset and the lexicon will enable benchmarking of the NER task in the Chinese scientific domain and drive progress in future research. The dataset and lexicon are available at: https://github.com/yangjingla/SciCN.git.
Joint entity relation extraction models that integrate the semantic information of relations are favored by researchers because of their effectiveness in handling overlapping entities, and manually defining relation semantic templates is particularly effective because it captures the deep semantic information of relations. However, this manual approach relies on expert experience and has poor portability. Inspired by rule-based entity relation extraction methods, this paper proposes a joint entity relation extraction model based on automatically constructed relation semantic templates, abbreviated as RSTAC. The model refines extraction rules for relation semantic templates from a relation corpus through dependency parsing, realizing the automatic construction of relation semantic templates. Based on these templates, the processes of relation classification and triplet extraction are constrained, and the entity relation triplets are finally obtained. Experimental results on three major Chinese datasets, DuIE, SanWen, and FinRE, show that the RSTAC model successfully captures rich deep relation semantics and improves the extraction of entity relation triplets, with F1 scores increased by an average of 0.96% compared with classical joint extraction models such as CasRel, TPLinker, and RFBFN.
Web information extraction is viewed as a classification process, and a competing classification method is presented to extract Web information directly through classification. Web fragments are represented with three general features, and the similarities between fragments are defined on the basis of these features. Through the competition of fragments for different slots in information templates, the method classifies fragments into slot classes and filters out noise. Far fewer annotated samples are needed than with rule-based methods, so the method has strong portability. Experiments show that the method performs well and is superior to a DOM-based method in information extraction.
Key words: information extraction; competing classification; feature extraction; wrapper induction. CLC number: TP 311. Foundation item: Supported by the National Natural Science Foundation of China (60303024). Biography: LI Xiang-yang (1974-), male, Ph.D. candidate; research directions: information extraction, natural language processing.
In order to use data available on the Internet, it is necessary to extract it from web pages. An HTT tree model representing HTML pages is presented. Based on the HTT model, a wrapper generation algorithm, AGW, is proposed. The AGW algorithm uses comparison and correction techniques to generate the wrapper from the native characteristics of the HTT tree structure. The AGW algorithm can not only generate the wrapper automatically, but also rebuild the data schema easily and reduce computational complexity.
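The abstract does not specify the HTT model or the AGW algorithm in detail; the following is only a minimal illustration, using Python's standard-library `html.parser`, of the general idea of building a tag tree from an HTML page so that data can be read off its structure. The `TreeBuilder` class and its dictionary node layout are hypothetical, not the paper's model.

```python
from html.parser import HTMLParser

class TreeBuilder(HTMLParser):
    """Builds a simple nested-dictionary tree from HTML tags, in the spirit
    of tag-tree models used for wrapper generation (structure illustrative)."""
    def __init__(self):
        super().__init__()
        self.root = {"tag": "root", "text": "", "children": []}
        self.stack = [self.root]

    def handle_starttag(self, tag, attrs):
        # Open a new node under the current top of the stack.
        node = {"tag": tag, "text": "", "children": []}
        self.stack[-1]["children"].append(node)
        self.stack.append(node)

    def handle_endtag(self, tag):
        if len(self.stack) > 1:
            self.stack.pop()

    def handle_data(self, data):
        # Accumulate text content on the currently open node.
        self.stack[-1]["text"] += data.strip()

p = TreeBuilder()
p.feed("<table><tr><td>price</td><td>42</td></tr></table>")
# Walk root -> table -> tr and read the cell texts.
cells = [c["text"] for c in p.root["children"][0]["children"][0]["children"]]
# cells == ["price", "42"]
```

A wrapper generator would compare such trees across pages of the same site to locate the nodes that carry the target data.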
Key information extraction can reduce dimensional effects while evaluating the correct preferences of users during semantic data analysis. Currently, classifiers are used to maximize the performance of web-page recommendation in terms of precision and satisfaction. A recent method disambiguates contextual sentiment using conceptual prediction with robustness; however, the conceptual prediction method cannot yield the optimal solution. Context-dependent terms are primarily evaluated by constructing a linear space of context features, presuming that if terms occur together in certain consumer-related reviews, they are semantically dependent. Moreover, the more frequently they co-occur, the greater the semantic dependency. However, the influence of co-occurring terms is only partly captured by the frequency of their semantic dependence, as the terms are non-integrative and their individual meanings cannot be derived. In this work, we treat the strength of a term and the influence of a term as a combinatorial optimization, called Combinatorial Optimized Linear Space Knapsack for Information Retrieval (COLSK-IR). COLSK-IR is formulated as a knapsack problem with the total weight being the term influence and the total value being the term frequency for semantic data analysis. The approach by which term influence and term frequency are jointly considered to identify optimal solutions is what we call combinatorial optimization. Thus, we cast the knapsack formulation as an integer programming problem and perform multiple experiments in the linear space through combinatorial optimization to identify possible optimal solutions. Our experimental results show that COLSK-IR outperforms previous methods in detecting strongly dependent snippets with minimal ambiguity related to inter-sentential context during semantic data analysis.
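The knapsack formulation above can be sketched with a standard 0/1 knapsack dynamic program: each term carries a weight (its influence) and a value (its frequency), and we maximize total frequency under an influence budget. The term names, the integer weights, and the `knapsack_select` helper are illustrative assumptions; the paper's actual scoring of influence and frequency is not reproduced here.

```python
def knapsack_select(terms, capacity):
    """0/1 knapsack. terms: list of (name, influence, frequency) with integer
    influence. Maximizes total frequency with total influence <= capacity."""
    n = len(terms)
    dp = [0] * (capacity + 1)          # dp[c] = best value within budget c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (_, w, v) in enumerate(terms):
        for c in range(capacity, w - 1, -1):   # iterate backwards: 0/1 items
            if dp[c - w] + v > dp[c]:
                dp[c] = dp[c - w] + v
                keep[i][c] = True
    # Backtrack to recover which terms were chosen.
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(terms[i][0])
            c -= terms[i][1]
    return dp[capacity], list(reversed(chosen))

# Hypothetical terms: (name, influence, frequency).
terms = [("cloud", 3, 4), ("edge", 4, 5), ("iot", 2, 3)]
best, picked = knapsack_select(terms, capacity=5)
# best == 7, picked == ["cloud", "iot"]
```

With a budget of 5, choosing "cloud" and "iot" (influence 3 + 2) yields total frequency 7, beating "edge" alone.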
In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single-barrier structure based on exclusion models, namely the counter-flows model and the tunnel model. The first four cumulants of these two exclusion models are computed in a single-barrier structure, and their characteristics are obtained. A scheme using the first three cumulants is devised to check whether a transport process follows the counter-flows model, the tunnel model, or neither of them. Time series generated by Monte Carlo techniques are adopted to validate the extraction procedure, and the results are reasonable.
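As a minimal numerical sketch of the cumulant computation (the exclusion-model simulation itself is not reproduced here), the first four cumulants of a sample of transferred-charge counts can be estimated from central moments. The sample values below are arbitrary; for a Poisson process all cumulants equal the mean, so ratios such as c2/c1 (the Fano factor) and the higher cumulants are what distinguish transport models.

```python
def cumulants(samples):
    # First four cumulants estimated from central moments of a count sample.
    n = len(samples)
    c1 = sum(samples) / n                                  # mean
    c2 = sum((x - c1) ** 2 for x in samples) / n           # variance
    c3 = sum((x - c1) ** 3 for x in samples) / n           # = 3rd central moment
    m4 = sum((x - c1) ** 4 for x in samples) / n
    c4 = m4 - 3 * c2 ** 2                                  # 4th cumulant
    return c1, c2, c3, c4

# Arbitrary toy counts, purely to exercise the formulas.
m, var, k3, k4 = cumulants([1, 2, 3, 4])
# m == 2.5, var == 1.25, k3 == 0.0, k4 == -2.125
```

In practice the counts would come from Monte Carlo time series of the exclusion models, and the first three cumulants would be compared against the models' theoretical predictions.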
Due to higher demands on product diversity, flexible switching between the production of different products on one piece of equipment has become a popular solution, resulting in multiple operation modes in a single process. In order to handle such multi-mode processes, a novel double-layer structure is proposed, and the original data are decomposed into common and specific characteristics according to the relationships between variables within each mode. In addition, both low- and high-order information are considered in each layer. The common and specific information within each mode can be captured and separated into several subspaces according to the different order information. The performance of the proposed method is validated through a numerical example and the Tennessee Eastman (TE) benchmark. Compared with previous methods, the superiority of the proposed method is demonstrated by better monitoring results.
Traditional pattern representations in information extraction lack the ability to represent domain-specific concepts and are therefore inflexible. To overcome these restrictions, an enhanced pattern representation is designed that includes ontological concepts, neighboring-tree structures, and soft constraints. An information-extraction inference engine based on hypothesis generation and conflict resolution is implemented. The proposed technique is successfully applied to an information extraction system for the Chinese-language query front-end of a job-recruitment search engine.
Information extraction plays a vital role in natural language processing for extracting named entities and events from unstructured data. Due to the exponential data growth in the agricultural sector, extracting significant information has become a challenging task. Although existing deep learning-based techniques have been applied in smart agriculture for crop cultivation, crop disease detection, weed removal, and yield prediction, it remains difficult to find the semantics between extracted pieces of information due to the unswerving effects of weather, soil, pest, and fertilizer data. This paper consists of two parts: an initial phase, which proposes a data preprocessing technique for removing ambiguity in the input corpora, and a second phase, which proposes a novel deep learning-based long short-term memory model with rectification in the Adam optimizer and a multilayer perceptron to find agricultural named entities, events, and the relations between them. The proposed algorithm has been trained and tested on four input corpora: agriculture, weather, soil, and pest & fertilizers. The experimental results were compared with existing techniques, and it was observed that the proposed algorithm outperforms Weighted-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes on standard parameters such as accuracy, sensitivity, and specificity.
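The evaluation parameters named above (accuracy, sensitivity, specificity) have standard definitions from the binary confusion matrix; a small sketch, with the confusion counts below chosen purely for illustration:

```python
def metrics(tp, fp, tn, fn):
    # Standard binary-classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts, not results from the paper.
acc, sens, spec = metrics(tp=80, fp=10, tn=90, fn=20)
# acc == 0.85, sens == 0.8, spec == 0.9
```

For multi-class NER these are typically computed per entity class and then averaged.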
Satellite remote sensing data are commonly used to analyze the spatial distribution pattern of geological structures and generally serve as a significant means for the identification of alteration zones. Based on Landsat Enhanced Thematic Mapper (ETM+) data, which have better spectral resolution (8 bands) and spatial resolution (15 m in the PAN band), synthesis processing techniques are presented for alteration information extraction: data preparation, vegetation indices and band ratios, and expert classifier-based classification. These techniques have been implemented in the MapGIS-RSP software (version 1.0), developed by the Wuhan Zondy Cyber Technology Co., Ltd, China. In the study area application of extracting alteration information in the Zhaoyuan (招远) gold mines, Shandong (山东) Province, China, several hydrothermally altered zones (including two new sites) were found through satellite imagery interpretation coupled with field surveys. It is concluded that these synthesis processing techniques are useful approaches applicable to a wide range of gold-mineralization alteration information extraction tasks.
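A minimal sketch of the band-ratio and vegetation-index step: ratios of ETM+ bands are commonly used as alteration indicators (for example, SWIR1/SWIR2 for hydroxyl-bearing minerals) alongside NDVI for vegetation masking. The specific bands, thresholds, and toy reflectance values here are assumptions for illustration, not the bands or rules used in the paper's expert classifier.

```python
import numpy as np

def band_ratio(a, b, eps=1e-9):
    # Elementwise ratio of two band arrays, guarded against division by zero.
    return a / (b + eps)

# Toy per-pixel reflectances for two pixels (illustrative values).
b3 = np.array([0.20, 0.30])   # red
b4 = np.array([0.55, 0.35])   # near-infrared
b5 = np.array([0.40, 0.25])   # SWIR1
b7 = np.array([0.20, 0.24])   # SWIR2

ndvi = (b4 - b3) / (b4 + b3)        # vegetation index, used for masking
hydroxyl = band_ratio(b5, b7)       # high values often flag OH-bearing alteration
# Pixel 0 has a higher hydroxyl ratio than pixel 1.
```

In a full workflow these ratio images would feed the expert classifier together with the prepared bands.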
Visual Information Extraction (VIE) is a technique that enables users to perform information extraction from visual documents, driven by the visual appearance of and the spatial relations among the elements in the document. In particular, extractions are expressed through a query language similar to the well-known SQL. To further reduce the human effort in the extraction task, this paper presents a fully formalized assistance mechanism that helps users in the interactive formulation of queries.
Purpose: In order to annotate the semantic information and extract the research-level information of research papers, we seek a method for developing an information extraction system.
Design/methodology/approach: A semantic dictionary and a conditional random field model (CRFM) were used to annotate the semantic information of research papers. Based on the annotation results, the research-level information was extracted through regular expressions. All functions were implemented on the Sybase platform.
Findings: In our experiment on carbon nanotube research, the precision and recall rates reached 65.13% and 57.75%, respectively, after the semantic properties of word classes were labeled, and the F-measure increased dramatically from less than 50% to 60.18% when semantic features were added. Our experiment also showed that the information extraction system for research level (IESRL) can extract performance indicators from research papers rapidly and effectively.
Research limitations: Some text information, such as formatting and charts, may have been lost during the conversion of text from PDF to TXT files. Semantic labeling of sentences may be insufficient due to the rich meanings of lexicons in the semantic dictionary.
Research implications: The established system can help researchers rapidly compare the level of different research papers and find their implicit innovation value. It could also be used as an auxiliary tool for analyzing the research levels of various research institutions.
Originality/value: In this work, we have successfully established an information extraction system for research papers using a revised semantic annotation method based on the CRFM and a semantic dictionary. Our system analyzes the information extraction problem at two levels, i.e., the sentence level and the noun (phrase) level of research papers. Compared with extraction methods based on knowledge engineering and on machine learning, our system shows the advantages of both.
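The regular-expression extraction of performance indicators and the F-measure used above can be sketched as follows. The indicator pattern, the unit list, and the example sentence are hypothetical stand-ins; the paper's actual expressions and its reported 60.18% F-measure (which also reflects feature changes, not just this formula) are not reproduced here.

```python
import re

def f_measure(precision, recall):
    # Harmonic mean of precision and recall (F1).
    return 2 * precision * recall / (precision + recall)

# Hypothetical pattern for numeric performance indicators with a unit.
pattern = re.compile(r"(\d+(?:\.\d+)?)\s*(S/cm|GPa|nm)")
text = "The film showed a conductivity of 1.2 S/cm and a thickness of 40 nm."
indicators = pattern.findall(text)
# indicators == [("1.2", "S/cm"), ("40", "nm")]

# F1 of the precision/recall pair from the abstract (0.6513, 0.5775).
f = f_measure(0.6513, 0.5775)
```

In the full system, such patterns would run over sentences that the CRFM has already labeled as indicator-bearing.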
Event detection (ED) aims at detecting event occurrences and categorizing them. This task has previously been solved via the recognition and classification of event triggers (ETs), defined as the phrase or word most clearly expressing an event occurrence. Thus, current approaches require both annotated triggers and event types in the training data. Nevertheless, triggers are non-essential to ED, and it is time-consuming for annotators to identify the "most clearly" expressing word in a sentence, particularly in longer sentences. To decrease manual effort, we evaluate event detection without triggers. We propose a novel framework that combines Type-aware Attention and Graph Convolutional Networks (TA-GCN) for event detection. Specifically, the task is formulated as a multi-label classification problem. We first encode the input sentence using a novel type-aware neural network with attention mechanisms. Then, a Graph Convolutional Network (GCN)-based multi-label classification model is exploited for event detection. Experimental results demonstrate the effectiveness of the proposed framework.
This paper focuses on term-status pair extraction from medical dialogues (MD-TSPE), which is essential in diagnosis dialogue systems and the automatic scribing of electronic medical records (EMRs). In the past few years, work on MD-TSPE has attracted increasing research attention, especially after the remarkable progress made by generative methods. However, these generative methods output a whole sequence consisting of term-status pairs in one stage and ignore integrating prior knowledge, whereas the task demands a deeper understanding to model the relationship between terms and infer the status of each term. This paper presents a knowledge-enhanced two-stage generative framework (KTGF) to address the above challenges. Using task-specific prompts, we employ a single model to complete MD-TSPE through two phases in a unified generative form: we first generate all terms and then generate the status of each generated term. In this way, the relationships between terms can be learned more effectively from the sequence containing only terms in the first phase, and our designed knowledge-enhanced prompt in the second phase can leverage the category and status candidates of each generated term for status generation. Furthermore, our proposed special status "not mentioned" makes more terms available and enriches the training data in the second phase, which is critical in the low-resource setting. Experiments on the Chunyu and CMDD datasets show that the proposed method achieves superior results compared to state-of-the-art models in both the full-training and low-resource settings.
Anomaly separation using geochemical data often involves operations in the frequency domain, such as filtering and reducing noise-to-signal ratios. Unfortunately, the abrupt truncation of an image along edges and holes (with missing data) often causes distortion of the frequency distribution in the frequency domain. For example, bright strips are commonly seen in the frequency distribution when using a Fourier transform. Such edge-effect distortion may affect information extraction results, sometimes severely, depending on the edge abruptness of the image. Traditionally, edge effects are reduced by smoothing the image boundary prior to applying a Fourier transform. Zero-padding is one of the most commonly used smoothing methods. This simple method can reduce the edge effect to some degree but still distorts the image in some cases. Moreover, due to the complexity of geoscience images, which can include irregular shapes and holes with missing data, zero-padding does not always give satisfactory results. This paper proposes the use of decay functions to handle edge effects when extracting information from geoscience images. As an application, the method has been used in a newly developed multifractal method (S-A) for separating geochemical anomalies from background patterns. A geochemical dataset from a mineral district in Nova Scotia, Canada, was used to validate the method.
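A minimal sketch of the decay-function idea for a rectangular image: instead of truncating abruptly (or zero-padding), the image is multiplied by a window that decays smoothly to zero at the edges before the Fourier transform, which suppresses the bright-strip leakage. The half-cosine roll-off below is one common choice of decay function; the paper's actual decay functions for irregular shapes and holes are not reproduced here.

```python
import numpy as np

def cosine_taper(n, width):
    # 1-D decay function: flat in the middle, half-cosine roll-off at both ends.
    w = np.ones(n)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(width) / width))
    w[:width] = ramp
    w[-width:] = ramp[::-1]
    return w

def taper2d(img, width):
    # Outer product of 1-D tapers gives a 2-D window decaying toward every edge.
    wy = cosine_taper(img.shape[0], width)
    wx = cosine_taper(img.shape[1], width)
    return img * np.outer(wy, wx)

img = np.ones((64, 64))          # stand-in for a gridded geochemical map
tapered = taper2d(img, width=8)  # edges decay smoothly to zero
spec = np.abs(np.fft.fft2(tapered))
```

Filtering (such as the S-A separation) would then be applied to `spec` and inverted, with the taper compensated in the interior where it equals one.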
Objectives: This article presents a new computational procedure to discover scratches buried in the earth's crust. We also validate this new interdisciplinary analysis method with regional gravity data from the well-known Dabie orogenic zone.
Methods: Based on the scratch analysis method developed from the mathematical morphology of surfaces, we present a procedure that extracts information on crustal scratches from regional gravity data. Because crustal scratches are positively and highly correlated with crustal deformation bands, the procedure can be used to delineate crustal deformation belts. The scratches can be quantitatively characterized by calculating the ridge coefficient function, whose high-value traces delineate the deformation bands hidden in the regional gravity field. In addition, because the degree of crustal deformation is an important indicator for dividing tectonic units, the crust can be further divided into tectonic units according to the degree of crustal deformation using the ridge coefficient data, providing an objective base map for earth scientists to build tectonic models with quantitative evidence.
Results: After the ridge coefficients are calculated, we can further enhance the boundaries of high ridge-coefficient blocks, yielding the so-called ridge-edge coefficient function. The high-value ridge-edge coefficients correlate well with the edge faults of the underlying tectonic units, providing accurate positioning of the base map for the compilation of regional tectonic maps. To validate this new interdisciplinary analysis method, we selected the Dabie orogenic zone as a pilot area, where rock outcrops are well exposed at the surface and detailed geological and geophysical surveys have been carried out. Tests show that the deformation bands and tectonic units, which have been confirmed by tectonic scientists based on surface observations, are clearly displayed in the ridge and ridge-edge coefficient images obtained in this article. Moreover, these computer-generated images provide more accurate locations and geometric details.
Conclusions: This work demonstrates that the application of modern mathematical tools can increase the quantitative rigor of modern geoscience research, helping to open the door to a new branch of mathematical tectonics.
Funding: Supported by the National Natural Science Foundation of China (No. 40671136) and the National High Technology Research and Development Program of China (Nos. 2006AA06Z115, 2006AA120106).
Funding: The "973" Project of China under contract No. 2006CB701305, the "863" Project of China under contract No. 2009AA12Z148, and the National Natural Science Foundation of China under contract No. 40971224.
Funding: the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1G1A1003312); the Ministry of Education (NRF-2021R1I1A3052815).
Abstract: Analyzing Research and Development (R&D) trends is important because it can influence future decisions regarding R&D direction. In typical trend analysis, topic or technology taxonomies are employed to compute the popularity of topics or codes over time. Although this is simple and effective, the taxonomies are difficult to maintain because new technologies are introduced rapidly. Therefore, recent studies exploit deep learning to extract pre-defined targets such as problems and solutions. Building on recent advances in question answering (QA) with deep learning, we adopt a multi-turn QA model to extract problems and solutions from Korean R&D reports. Unlike previous research, we use the reports directly and analyze the difficulties of handling them in a QA style for information extraction (IE) on a sentence-level benchmark dataset. After investigating the characteristics of Korean R&D reports, we propose a model that handles multiple and repeated appearances of targets in the reports. The proposed model includes an algorithm with two novel modules and a prompt, and the methodology focuses on reformulating questions without a static template or pre-defined knowledge. We show the effectiveness of the proposed model on a Korean R&D report dataset that we constructed, and we present an in-depth analysis of the benefits of the multi-turn QA model.
Abstract: Because of the developed economy and lush vegetation in southern China, the following obstacles exist in remote sensing land surface classification: 1) diverse surface composition types; 2) undulating terrain; 3) small, fragmented land parcels; 4) indistinguishable shadows of surface objects. A top priority is to clarify how big data concepts (data mining technology) and various new techniques and methods can move complex-surface remote sensing information extraction toward automation, refinement, and intelligence. To achieve these research objectives, this paper takes Gaofen-2 satellite data produced in China as the data source and complex-surface remote sensing information extraction as the research object, intelligently analyzing the remote sensing information of complex surfaces after completing data collection and preprocessing. The specific extraction methods are: 1) extraction of fractal texture features based on fractional Brownian motion; 2) extraction of color features; 3) extraction of vegetation indices; 4) construction of feature vectors and the corresponding classification. In this paper, the fractal texture features, color features, vegetation features, and spectral features of remote sensing images are combined into a composite feature vector. Raising the feature dimension in this way increases the separability of surface classes and thus improves the classification accuracy of remote sensing images. The method is suitable for remote sensing information extraction over the complex surfaces of southern China and can be extended to other complex surface areas in the future.
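The idea of stacking spectral bands with derived features into one vector per pixel can be illustrated with a minimal sketch. The NDVI formula is standard; the simple sliding-window variance below is only a stand-in for the article's fractal texture features, and the band names are assumptions.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def local_variance(band, k=3):
    """A crude texture measure: variance in a k-by-k sliding window."""
    h, w = band.shape
    pad = k // 2
    padded = np.pad(band, pad, mode="edge")
    out = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].var()
    return out

def feature_stack(bands):
    """Stack spectral bands, NDVI, and texture into one vector per pixel.
    Assumes bands is a dict with at least 'red' and 'nir' float arrays."""
    feats = list(bands.values())
    feats.append(ndvi(bands["nir"], bands["red"]))
    feats.append(local_variance(bands["nir"]))
    return np.stack(feats, axis=-1)      # shape: (h, w, n_features)

red = np.full((4, 4), 0.1)
nir = np.full((4, 4), 0.5)
X = feature_stack({"red": red, "nir": nir})
```

The stacked array can then be fed row-by-row to any per-pixel classifier; higher-dimensional vectors like this are what the abstract means by "improving the difference of remote sensing features".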
Funding: This research was supported by the National Key Research and Development Program [2020YFB1006302].
Abstract: Named entity recognition (NER) is a fundamental task of information extraction (IE), and it has attracted considerable research attention in recent years. Abundant annotated English NER datasets have significantly promoted NER research in English. By contrast, much less effort has been devoted to Chinese NER research, especially in the scientific domain, due to the scarcity of Chinese NER datasets. To alleviate this problem, we present a Chinese scientific NER dataset, SciCN, which contains entity annotations of titles and abstracts derived from 3,500 scientific papers. We manually annotate a total of 62,059 entities, classified into six types. Compared to English scientific NER datasets, SciCN is larger and more diverse, as it not only contains more paper abstracts but also draws them from more research fields. To investigate the properties of SciCN and provide baselines for future research, we adapt a number of previous state-of-the-art Chinese NER models and evaluate them on SciCN. Experimental results show that SciCN is more challenging than other Chinese NER datasets. In addition, previous studies have proven the effectiveness of using lexicons to enhance Chinese NER models. Motivated by this fact, we provide a scientific domain-specific lexicon. Validation results demonstrate that our lexicon delivers better performance gains than lexicons of other domains. We hope that the SciCN dataset and the lexicon will enable benchmarking of the NER task in the Chinese scientific domain and foster future research. The dataset and lexicon are available at: https://github.com/yangjingla/SciCN.git.
Funding: supported by the National Natural Science Foundation of China (Nos. U1804263, U1736214, 62172435) and the Zhongyuan Science and Technology Innovation Leading Talent Project (No. 214200510019).
Abstract: Joint entity relation extraction models that integrate the semantic information of relations are favored by researchers because of their effectiveness in handling overlapping entities, and manually defined relation semantic templates are particularly effective because they capture the deep semantics of relations. However, manual template definition relies on expert experience and ports poorly to new domains. Inspired by rule-based entity relation extraction, this paper proposes a joint entity relation extraction model based on automatically constructed relation semantic templates, abbreviated RSTAC. The model refines extraction rules for relation semantic templates from a relation corpus through dependency parsing, realizing automatic template construction. Based on the relation semantic templates, the processes of relation classification and triplet extraction are constrained, and finally the entity relation triplets are obtained. Experimental results on three major Chinese datasets (DuIE, SanWen, and FinRE) show that the RSTAC model successfully captures the rich deep semantics of relations and improves the extraction of entity relation triplets, with F1 scores increased by an average of 0.96% over classical joint extraction models such as CasRel, TPLinker, and RFBFN.
Abstract: Web information extraction is viewed as a classification process, and a competing classification method is presented to extract Web information directly through classification. Web fragments are represented with three general features, and the similarities between fragments are then defined on the basis of these features. Through competition among fragments for different slots in information templates, the method classifies fragments into slot classes and filters out noise. Far fewer annotated samples are needed than for rule-based methods, so the method is highly portable. Experiments show that the method performs well and is superior to a DOM-based method in information extraction. Key words: information extraction; competing classification; feature extraction; wrapper induction. CLC number: TP 311. Foundation item: Supported by the National Natural Science Foundation of China (60303024). Biography: LI Xiang-yang (1974-), male, Ph.D. candidate; research directions: information extraction, natural language processing.
Funding: the National Grand Fundamental Research 973 Program of China (G1998030414).
Abstract: In order to use the data available on the Internet, it is necessary to extract data from web pages. An HTT tree model representing HTML pages is presented. Based on the HTT model, a wrapper generation algorithm, AGW, is proposed. The AGW algorithm uses comparing-and-correcting techniques to generate the wrapper from the native characteristics of the HTT tree structure. The AGW algorithm can not only generate the wrapper automatically but also rebuild the data schema easily and reduce computational complexity.
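A tag tree in the spirit of the HTT model can be built with Python's standard `html.parser`. The `Node` class and the toy `rows` wrapper below are illustrative stand-ins for the paper's structures, not the AGW algorithm itself.

```python
from html.parser import HTMLParser

class Node:
    """A minimal tree node: tag name, parent link, children, and text."""
    def __init__(self, tag, parent=None):
        self.tag, self.parent = tag, parent
        self.children, self.text = [], ""

class TreeBuilder(HTMLParser):
    """Build a simple tag tree (akin to an HTT tree) from an HTML page."""
    def __init__(self):
        super().__init__()
        self.root = Node("root")
        self.cur = self.root

    def handle_starttag(self, tag, attrs):
        node = Node(tag, self.cur)
        self.cur.children.append(node)
        self.cur = node

    def handle_endtag(self, tag):
        if self.cur.parent is not None:
            self.cur = self.cur.parent

    def handle_data(self, data):
        self.cur.text += data.strip()

def rows(root):
    """A toy wrapper: extract the cell texts of each table row."""
    out = []
    def walk(n):
        if n.tag == "tr":
            out.append([c.text for c in n.children if c.tag == "td"])
        for c in n.children:
            walk(c)
    walk(root)
    return out

html = "<table><tr><td>A</td><td>1</td></tr><tr><td>B</td><td>2</td></tr></table>"
p = TreeBuilder()
p.feed(html)
data = rows(p.root)
```

A real wrapper generator would compare such trees across pages to induce the extraction rules automatically; this sketch only shows the tree representation that such comparison operates on.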
Abstract: Key information extraction can reduce dimensional effects while evaluating the correct preferences of users during semantic data analysis. Currently, classifiers are used to maximize the performance of web-page recommendation in terms of precision and satisfaction. A recent method disambiguates contextual sentiment using conceptual prediction with robustness; however, the conceptual prediction method cannot yield the optimal solution. Context-dependent terms are primarily evaluated by constructing a linear space of context features, presuming that if terms occur together in certain consumer-related reviews, they are semantically dependent; moreover, the more frequently they co-occur, the greater the semantic dependency. However, the influence of co-occurring terms can only partly be captured by term frequency as a measure of semantic dependence, because the terms are non-integrative and their individual meanings cannot be derived from frequency alone. In this work, we treat the strength of a term and the influence of a term as a combinatorial optimization, called Combinatorial Optimized Linear Space Knapsack for Information Retrieval (COLSK-IR). COLSK-IR is formulated as a knapsack problem with the total weight being the "term influence" and the total value being the "term frequency" for semantic data analysis. The method by which term influence and term frequency are jointly considered to identify optimal solutions is called combinatorial optimization. Thus, we solve the knapsack as an integer programming problem and perform multiple experiments in the linear space through combinatorial optimization to identify the possible optimum solutions. Our experimental results show that COLSK-IR outperforms previous methods in detecting strongly dependent snippets with minimal ambiguity that are related to inter-sentential context during semantic data analysis.
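The knapsack formulation described above — term influence as weight, term frequency as value — reduces to the classic 0/1 knapsack, which can be solved exactly by dynamic programming. The sketch below is a generic solver with made-up term data, not the COLSK-IR system; integer influences are assumed.

```python
def knapsack_terms(terms, capacity):
    """0/1 knapsack: pick terms maximising total frequency (value)
    subject to a budget on total influence (weight).
    terms: list of (name, influence, frequency); influence is an int."""
    n = len(terms)
    # dp[w] = best total frequency achievable within influence budget w
    dp = [0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i, (_, wt, val) in enumerate(terms):
        for w in range(capacity, wt - 1, -1):   # reverse: each term used once
            if dp[w - wt] + val > dp[w]:
                dp[w] = dp[w - wt] + val
                keep[i][w] = True
    # backtrack to recover the chosen terms
    chosen, w = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][w]:
            chosen.append(terms[i][0])
            w -= terms[i][1]
    return dp[capacity], list(reversed(chosen))

# hypothetical review terms: (name, influence, frequency)
terms = [("price", 2, 3), ("battery", 3, 4), ("screen", 4, 5), ("camera", 5, 6)]
best, chosen = knapsack_terms(terms, capacity=5)
```

With an influence budget of 5, "price" plus "battery" (total frequency 7) beats taking "camera" alone (frequency 6), which is exactly the trade-off the abstract's combinatorial optimization targets.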
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60676053) and the Applied Material in Xi'an Innovation Funds, China (Grant No. XA-AM-200603).
Abstract: In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single-barrier structure based on exclusion models, which include the counter-flows model and the tunnel model. The first four cumulants of these two exclusion models are computed for a single-barrier structure, and their characteristics are obtained. A scheme using the first three cumulants is devised to check whether a transport process follows the counter-flows model, the tunnel model, or neither. Time series generated by Monte Carlo techniques are adopted to validate the extraction procedure, and the results are reasonable.
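Estimating the first four cumulants from a Monte Carlo time series is straightforward from the sample central moments. The sketch below is a generic estimator, not the article's specific exclusion-model computation; the Poisson counting example is an assumption chosen because its cumulants are all equal to the rate.

```python
import numpy as np

def first_four_cumulants(x):
    """First four cumulants of a sample, from its central moments:
    k1 = mean, k2 = m2, k3 = m3, k4 = m4 - 3*m2**2."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    d = x - mu
    m2, m3, m4 = (d**2).mean(), (d**3).mean(), (d**4).mean()
    return mu, m2, m3, m4 - 3 * m2**2

# For counting statistics, a Poisson process has all cumulants equal to
# its rate, so k2/k1 (the Fano factor) is close to 1 for large samples.
rng = np.random.default_rng(0)
counts = rng.poisson(lam=10.0, size=200_000)
k1, k2, k3, k4 = first_four_cumulants(counts)
fano = k2 / k1
```

Comparing such cumulant ratios against the analytic predictions of each exclusion model is the kind of check the abstract's first-three-cumulants scheme performs.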
Funding: the National Natural Science Foundation of China (61903352); the China Postdoctoral Science Foundation (2020M671721); the Zhejiang Province Natural Science Foundation of China (LQ19F030007); the Natural Science Foundation of Jiangsu Province (BK20180594); the Project of the Department of Education of Zhejiang Province (Y202044960); the Project of Zhejiang Tongji Vocational College of Science and Technology (TRC1904); the Foundation of the Key Laboratory of Advanced Process Control for Light Industry (Jiangnan University), Ministry of Education, P.R. China (APCLI1803).
Abstract: Due to higher demands on product diversity, flexible switching between the production of different products on one piece of equipment has become a popular solution, resulting in multiple operation modes within a single process. In order to handle such multi-mode processes, a novel double-layer structure is proposed, and the original data are decomposed into common and specific characteristics according to the relationships between variables in each mode. In addition, both low-order and high-order information are considered in each layer. The common and specific information within each mode can be captured and separated into several subspaces according to the different order information. The performance of the proposed method is validated on a numerical example and the Tennessee Eastman (TE) benchmark. Compared with previous methods, the superiority of the proposed method is demonstrated by its better monitoring results.
Abstract: Traditional pattern representations in information extraction lack the ability to represent domain-specific concepts and are therefore inflexible. To overcome these restrictions, an enhanced pattern representation is designed that includes ontological concepts, neighboring-tree structures, and soft constraints. An information-extraction inference engine based on hypothesis generation and conflict resolution is implemented. The proposed technique is successfully applied to an information extraction system for the Chinese-language query front-end of a job-recruitment search engine.
Funding: This work was supported by the Deanship of Scientific Research at King Khalid University through a General Research Project under Grant Number GRP/41/42.
Abstract: Information extraction plays a vital role in natural language processing for extracting named entities and events from unstructured data. Due to the exponential data growth in the agricultural sector, extracting significant information has become a challenging task. Though existing deep learning-based techniques have been applied in smart agriculture for crop cultivation, crop disease detection, weed removal, and yield production, it is still difficult to find the semantics between extracted information due to the unpredictable effects of weather, soil, pest, and fertilizer data. This paper consists of two parts. The first phase proposes a data preprocessing technique for removing ambiguity from the input corpora, and the second phase proposes a novel deep learning-based long short-term memory with rectification in the Adam optimizer and a multilayer perceptron to perform agricultural named entity recognition, event detection, and relation extraction. The proposed algorithm has been trained and tested on four input corpora: agriculture, weather, soil, and pest & fertilizer. The experimental results were compared with existing techniques, and it was observed that the proposed algorithm outperforms Weighted-SOM, LSTM+RAO, PLR-DBN, KNN, and Naïve Bayes on standard parameters such as accuracy, sensitivity, and specificity.
Funding: The paper is supported by the Research Foundation for Outstanding Young Teachers, China University of Geosciences (Wuhan) (Nos. CUGQNL0628, CUGQNL0640); the National High-Tech Research and Development Program (863 Program) (No. 2001AA135170); and the Postdoctoral Foundation of the Shandong Zhaojin Group Co. (No. 20050262120).
Abstract: Satellite remote sensing data are usually used to analyze the spatial distribution pattern of geological structures and generally serve as a significant means for the identification of alteration zones. Based on Landsat Enhanced Thematic Mapper (ETM+) data, which have better spectral resolution (8 bands) and spatial resolution (15 m in the PAN band), synthesis processing techniques are presented to accomplish alteration information extraction: data preparation, vegetation indices and band ratios, and expert classifier-based classification. These techniques have been implemented in the MapGIS-RSP software (version 1.0), developed by the Wuhan Zondy Cyber Technology Co., Ltd., China. In an application to extracting alteration information in the Zhaoyuan (招远) gold mines, Shandong (山东) Province, China, several hydrothermally altered zones (including two new sites) were found after satellite imagery interpretation coupled with field surveys. It is concluded that these synthesis processing techniques are useful approaches applicable to a wide range of gold-mineralization alteration information extraction.
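The band-ratio step can be sketched as follows. The band-3/band-1 ratio as an iron-oxide indicator and the NDVI vegetation mask are common TM/ETM+ practices, but the specific bands, thresholds, and function names below are illustrative assumptions, not the article's calibrated expert-classifier rules.

```python
import numpy as np

def band_ratio(num, den, eps=1e-9):
    """Element-wise band ratio, guarding against division by zero."""
    return num / (den + eps)

def alteration_mask(bands, ratio_threshold=2.0, ndvi_threshold=0.3):
    """Flag pixels whose band-3/band-1 ratio (a common iron-oxide
    indicator on TM/ETM+ imagery) is high, after masking vegetation
    with NDVI computed from bands 4 (NIR) and 3 (red).
    The thresholds here are illustrative, not calibrated values."""
    ratio = band_ratio(bands["b3"], bands["b1"])
    ndvi = (bands["b4"] - bands["b3"]) / (bands["b4"] + bands["b3"] + 1e-9)
    return (ratio > ratio_threshold) & (ndvi < ndvi_threshold)

# tiny synthetic scene: only the top-left pixel is both high-ratio
# and sparsely vegetated
bands = {
    "b1": np.array([[0.10, 0.10], [0.10, 0.10]]),
    "b3": np.array([[0.30, 0.05], [0.30, 0.05]]),
    "b4": np.array([[0.35, 0.40], [0.90, 0.40]]),
}
mask = alteration_mask(bands)
```

A real expert classifier chains several such ratio and index tests; the mask produced here is the kind of intermediate layer such a classifier combines.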
Abstract: Visual Information Extraction (VIE) is a technique that enables users to perform information extraction from visual documents, driven by the visual appearance of and the spatial relations among the elements in the document. In particular, extractions are expressed through a query language similar to the well-known SQL. To further reduce the human effort in the extraction task, this paper presents a fully formalized assistance mechanism that helps users interactively formulate queries.
Funding: supported by the National Social Science Foundation of China (Grant No. 12CTQ032).
Abstract: Purpose: In order to annotate the semantic information and extract the research level information of research papers, we sought to develop an information extraction system. Design/methodology/approach: A semantic dictionary and a conditional random field model (CRFM) were used to annotate the semantic information of research papers. Based on the annotation results, the research level information was extracted through regular expressions. All functions were implemented on the Sybase platform. Findings: In our experiment on carbon nanotube research, the precision and recall rates reached 65.13% and 57.75%, respectively, after the semantic properties of word classes were labeled, and the F-measure increased dramatically from less than 50% to 60.18% when semantic features were added. Our experiment also showed that the information extraction system for research level (IESRL) can extract performance indicators from research papers rapidly and effectively. Research limitations: Some text information, such as formatting and charts, might have been lost during the conversion of text from PDF to TXT files. Semantic labeling of sentences could be insufficient due to the rich meanings of lexicons in the semantic dictionary. Research implications: The established system can help researchers rapidly compare the levels of different research papers and find their implicit innovation value. It could also be used as an auxiliary tool for analyzing the research levels of various research institutions. Originality/value: In this work, we have successfully established an information extraction system for research papers using a revised semantic annotation method based on the CRFM and a semantic dictionary. Our system can analyze the information extraction problem on two levels, i.e., the sentence level and the noun (phrase) level of research papers. Compared with extraction methods based on knowledge engineering or on machine learning, our system shows the advantages of both.
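The regular-expression stage — pulling performance indicators out of annotated sentences — can be sketched as below. The indicator names, the tolerance of up to 20 intervening non-digit characters, and the unit list are illustrative assumptions, not the IESRL system's actual patterns.

```python
import re

# A minimal sketch of regular-expression extraction of performance
# indicators from text; the alternatives in <name> are illustrative.
INDICATOR = re.compile(
    r"(?P<name>precision|recall|F-measure|conductivity)"
    r"\D{0,20}?"                      # up to 20 non-digit chars in between
    r"(?P<value>\d+(?:\.\d+)?)\s*(?P<unit>%|S/cm)?",
    re.IGNORECASE,
)

def extract_indicators(sentence):
    """Return (indicator, numeric value, unit) triples found in a sentence."""
    return [(m.group("name"), float(m.group("value")), m.group("unit") or "")
            for m in INDICATOR.finditer(sentence)]

text = ("After labeling, precision reached 65.13% and the "
        "F-measure increased to 60.18%.")
found = extract_indicators(text)
```

A production system would draw the indicator vocabulary from its semantic dictionary rather than hard-coding alternatives, but the matching mechanics are the same.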
Funding: supported by the Hunan Provincial Natural Science Foundation of China (Grant No. 2020JJ4624); the National Social Science Fund of China (Grant No. 20&ZD047); the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 19A020); the National University of Defense Technology Research Project ZK20-46; and the Young Elite Scientists Sponsorship Program 2021-JCJQ-QT-050.
Abstract: Event detection (ED) is aimed at detecting event occurrences and categorizing them. This task has previously been solved via recognition and classification of event triggers (ETs), defined as the phrase or word most clearly expressing event occurrence. Thus, current approaches require both annotated triggers and event types in the training data. Nevertheless, triggers are non-essential in ED, and it is time-consuming for annotators to identify the "most clearly" expressive word in a sentence, particularly in longer sentences. To decrease the manual effort, we evaluate event detection without triggers. We propose a novel framework that combines Type-aware Attention and Graph Convolutional Networks (TA-GCN) for event detection. Specifically, the task is formulated as a multi-label classification problem. We first encode the input sentence using a novel type-aware neural network with attention mechanisms. Then, a Graph Convolutional Network (GCN)-based multi-label classification model is exploited for event detection. Experimental results demonstrate the effectiveness of the approach.
Funding: This work was supported by the Key Research Program of the Chinese Academy of Sciences (No. ZDBSSSW-JSC006) and the National Natural Science Foundation of China (No. 62206294).
Abstract: This paper focuses on term-status pair extraction from medical dialogues (MD-TSPE), which is essential in diagnosis dialogue systems and the automatic scribing of electronic medical records (EMRs). In the past few years, work on MD-TSPE has attracted increasing research attention, especially after the remarkable progress made by generative methods. However, these generative methods output a whole sequence consisting of term-status pairs in one stage and ignore prior knowledge, whereas a deeper understanding is needed to model the relationship between terms and infer the status of each term. This paper presents a knowledge-enhanced two-stage generative framework (KTGF) to address the above challenges. Using task-specific prompts, we employ a single model to complete MD-TSPE in two phases in a unified generative form: we first generate all terms and then generate the status of each generated term. In this way, the relationship between terms can be learned more effectively from the sequence containing only terms in the first phase, and our designed knowledge-enhanced prompt in the second phase can leverage the category and status candidates of the generated term for status generation. Furthermore, our proposed special status "not mentioned" makes more terms available and enriches the training data in the second phase, which is critical in the low-resource setting. Experiments on the Chunyu and CMDD datasets show that the proposed method achieves superior results compared to state-of-the-art models in both the full-training and low-resource settings.
Abstract: Anomaly separation using geochemical data often involves operations in the frequency domain, such as filtering and reducing noise-to-signal ratios. Unfortunately, abrupt truncation of an image along its edges and holes (with missing data) often causes frequency distribution distortion in the frequency domain. For example, bright strips are commonly seen in the frequency distribution when using a Fourier transform. Such edge-effect distortion may affect information extraction results, sometimes severely, depending on the edge abruptness of the image. Traditionally, edge effects are reduced by smoothing the image boundary prior to applying a Fourier transform. Zero-padding is one of the most commonly used smoothing methods. This simple method can reduce the edge effect to some degree but still distorts the image in some cases. Moreover, due to the complexity of geoscience images, which can include irregular shapes and holes with missing data, zero-padding does not always give satisfactory results. This paper proposes the use of decay functions to handle edge effects when extracting information from geoscience images. As an application, this method has been used in a newly developed multifractal method (S-A) for separating geochemical anomalies from background patterns. A geochemical dataset chosen from a mineral district in Nova Scotia, Canada was used to validate the method.
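The effect of a decay function can be demonstrated with a raised-cosine taper applied to an image before the FFT. This is a generic sketch of edge tapering under stated assumptions (a separable cosine decay, a linear-trend test image), not the article's specific decay functions, which also handle irregular shapes and holes.

```python
import numpy as np

def cosine_taper(n, frac=0.1):
    """1-D raised-cosine decay: 1 in the interior, smoothly 0 at the ends."""
    w = np.ones(n)
    m = max(1, int(n * frac))
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(m) / m))
    w[:m], w[-m:] = ramp, ramp[::-1]
    return w

def taper_image(img, frac=0.1):
    """Apply a separable 2-D decay function to suppress the sharp edge
    discontinuity before taking the Fourier transform."""
    wy = cosine_taper(img.shape[0], frac)
    wx = cosine_taper(img.shape[1], frac)
    return img * np.outer(wy, wx)

def hf_fraction(image, r=8):
    """Fraction of spectral amplitude farther than r bins from DC."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    c = spec.shape[0] // 2
    low = spec[c - r:c + r + 1, c - r:c + r + 1].sum()
    return 1.0 - low / spec.sum()

# A linear regional trend is discontinuous when periodically extended,
# which produces the bright-strip leakage the abstract describes.
img = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))
img = img - img.mean()
raw_leak = hf_fraction(img)
tapered_leak = hf_fraction(taper_image(img))
```

Comparing `raw_leak` with `tapered_leak` shows the decay function pushing spectral energy back toward low frequencies, i.e. suppressing the artificial high-frequency leakage caused by the edge step.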
Funding: the National Science Foundation and the Chinese Geological Survey supported this work.
Abstract: Objectives: This article presents a new computational procedure to discover scratches buried in the earth's crust. We also validate this new interdisciplinary analysis method with regional gravity data from the well-known Dabie orogenic zone. Methods: Based on the scratch analysis method, which evolved from the mathematical morphology of surfaces, we present a procedure that extracts information on crustal scratches from regional gravity data. Because crustal scratches are positively and highly correlated with crustal deformation bands, the method can be used to delineate crustal deformation belts. The scratches can be quantitatively characterized by calculating the ridge coefficient function, whose high-value traces delineate the deformation bands hidden in the regional gravity field. In addition, because the degree of crustal deformation is an important indicator for dividing tectonic units, the crust can be further divided into tectonic units according to the degree of crustal deformation using the ridge coefficient data, providing an objective base map for earth scientists to build tectonic models with quantitative evidence. Results: After the ridge coefficients are calculated, we can further enhance the boundaries of high-ridge-coefficient blocks, resulting in the so-called ridge-edge coefficient function. The high-value ridge-edge coefficients correlate well with the edge faults of the underlying tectonic units, providing an accurately positioned base map for the compilation of regional tectonic maps. To validate this new interdisciplinary analysis method, we selected the Dabie orogenic zone as a pilot area, where rock outcrops are well exposed on the surface and detailed geological and geophysical surveys have been carried out. Tests show that the deformation bands and tectonic units, which were confirmed by tectonic scientists based on surface observations, are clearly displayed on the ridge and ridge-edge coefficient images obtained in this article. Moreover, these computer-generated images provide more accurate locations and geometric details. Conclusions: This work demonstrates that the application of modern mathematical tools can increase the quantitative rigor of modern geoscience research, helping to open a door to the development of a new branch of mathematical tectonics.
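A generic way to highlight crest lines in a 2-D potential-field grid is a Hessian-based ridge measure. The sketch below is a morphology-style illustration under stated assumptions (negative minor Hessian eigenvalue as the ridge strength, a synthetic Gaussian ridge), not the article's exact ridge coefficient function.

```python
import numpy as np

def ridge_coefficient(field):
    """A crude ridge measure on a 2-D field: the negative of the smaller
    Hessian eigenvalue, clipped at zero, is large along crest lines.
    (A generic surface-morphology sketch, not the article's definition.)"""
    gy, gx = np.gradient(field)          # first derivatives (rows, cols)
    gyy, _ = np.gradient(gy)             # second derivatives
    gxy, gxx = np.gradient(gx)
    # eigenvalues of the Hessian [[gxx, gxy], [gxy, gyy]]
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    lam_min = 0.5 * (tr - np.sqrt(np.maximum(tr**2 - 4 * det, 0)))
    return np.clip(-lam_min, 0, None)

# synthetic gravity-like field with a ridge along the column x = 10
x = np.arange(32)
field = np.tile(np.exp(-0.5 * ((x - 10) / 2.0) ** 2), (32, 1))
rc = ridge_coefficient(field)
crest = int(np.argmax(rc[16]))           # column of maximum ridge response
```

High-value traces of such a measure follow the crest of the field, which is the qualitative behaviour the ridge coefficient images in the article exploit; a ridge-edge measure would then be obtained by enhancing the boundaries of the high-value blocks.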