In order to solve the problem of modeling product configuration knowledge at the semantic level and successfully implement the mass customization strategy, an approach to ontology-based configuration knowledge modeling, combining semantic web technologies, was proposed. A general configuration ontology was developed to provide a common concept structure for modeling the configuration knowledge and rules of specific product domains. The OWL web ontology language and the semantic web rule language (SWRL) were used to formally represent the configuration ontology, domain configuration knowledge and rules, enhancing the consistency, maintainability and reusability of all the configuration knowledge. The configuration knowledge modeling of a customizable personal computer family shows that the approach can provide explicit, computer-understandable knowledge semantics for specific product configuration domains and can efficiently support automatic configuration tasks for complex products.
Funding: The National Natural Science Foundation of China (No. 70471023).
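To make the modeling concrete, here is a minimal sketch of how such an ontology and a SWRL compatibility rule might be expressed with the owlready2 Python library; the PC-family classes, properties and IRI below are hypothetical illustrations, not taken from the paper.

```python
# A minimal sketch with owlready2; the PC-family ontology is hypothetical.
from owlready2 import Thing, Imp, get_ontology

onto = get_ontology("http://example.org/pc_config.owl")  # hypothetical IRI

with onto:
    class Configuration(Thing): pass
    class Component(Thing): pass
    class CPU(Component): pass
    class Mainboard(Component): pass
    class SocketCompatible(Configuration): pass          # inferred by the rule below
    class hasComponent(Configuration >> Component): pass  # object property
    class hasSocket(Component >> str): pass               # data property

    # SWRL configuration rule: a configuration whose mainboard and CPU share
    # the same socket value is classified as socket-compatible.
    rule = Imp()
    rule.set_as_rule(
        "Configuration(?c), hasComponent(?c, ?m), Mainboard(?m), "
        "hasComponent(?c, ?p), CPU(?p), hasSocket(?m, ?s), hasSocket(?p, ?s) "
        "-> SocketCompatible(?c)"
    )
    # Running a reasoner (e.g. owlready2's sync_reasoner_pellet) would then
    # infer SocketCompatible membership for valid configurations.
```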
To deal with the lack of semantic interoperability in traditional knowledge retrieval approaches, a semantic-based networked manufacturing (NM) knowledge retrieval architecture is proposed, which offers a series of tools for supporting knowledge sharing and promoting NM collaboration. A 5-tuple-based semantic information retrieval model is proposed, which includes interoperation on the semantic layer, and a test process is given for this model. Evaluation shows that the recall ratio and the precision ratio of manufacturing knowledge retrieval are greatly improved. Thus, a practical and reliable approach based on the semantic web is provided for solving concrete problems in regional networked manufacturing.
Funding: The National High Technology Research and Development Program of China (863 Program) (No. 2003AA1Z2560, 2002AA414060); the Key Science and Technology Program of Shaanxi Province (No. 2006K04-G10).
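As a concrete illustration of the recall/precision evaluation mentioned above, the following sketch computes both ratios for one hypothetical query; the document sets and numbers are invented for illustration.

```python
# Precision/recall over retrieved vs. relevant document sets (toy data).
def precision_recall(retrieved: set, relevant: set) -> tuple[float, float]:
    """Precision = |retrieved ∩ relevant| / |retrieved|; recall uses |relevant|."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Keyword-only retrieval vs. semantic retrieval for the same query (illustrative).
keyword_hits  = {"doc1", "doc4", "doc9"}
semantic_hits = {"doc1", "doc2", "doc4", "doc7"}
relevant_docs = {"doc1", "doc2", "doc4", "doc7", "doc8"}

print(precision_recall(keyword_hits, relevant_docs))   # (0.667, 0.4)
print(precision_recall(semantic_hits, relevant_docs))  # (1.0, 0.8)
```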
Process inference cannot be achieved effectively by traditional expert systems, while ontology and semantic technology offer a better solution for the knowledge acquisition and intelligent inference of expert systems. The application mode of ontology and semantic technology for process parameter recommendation is investigated. First, ontology, semantic web rule language (SWRL) rules and the related inference engines are introduced. Then, an inference method for processes based on ontology technology and SWRL rules is proposed. The construction method of the process ontology base and the writing criteria for SWRL rules are described. Finally, inference results are obtained. The proposed mode can serve as a reference for the construction of process knowledge bases as well as an expert system's reusable process rule library.
Funding: The National Science Foundation of China (No. 51575264); the Jiangsu Province Science Foundation for Excellent Youths (Grant BK20121011); the Fundamental Research Funds for the Central Universities (No. NS2015050).
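A sketch of what a process-ontology rule following such a writing criterion could look like, again in owlready2; the machining classes, the hardness property and the threshold are hypothetical, and SWRL builtin support depends on the reasoner used.

```python
# A hypothetical process-parameter recommendation rule in owlready2/SWRL form.
from owlready2 import Thing, Imp, get_ontology

onto = get_ontology("http://example.org/process.owl")  # hypothetical IRI

with onto:
    class Workpiece(Thing): pass
    class CarbideToolRecommended(Workpiece): pass   # the recommendation to infer
    class hasHardness(Workpiece >> float): pass     # data property (e.g. HRC)

    # Writing criterion: body atoms test asserted facts (possibly via SWRL
    # builtins such as greaterThan); the single head atom asserts the result.
    rule = Imp()
    rule.set_as_rule(
        "Workpiece(?w), hasHardness(?w, ?h), greaterThan(?h, 50) "
        "-> CarbideToolRecommended(?w)"
    )
```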
Computational techniques have been adopted in medical and biological systems for a long time. There is no doubt that the development and application of computational methods will greatly help in better understanding biomedical and biological functions. Large amounts of data have been produced by biomedical and biological experiments and simulations. For researchers to gain knowledge from original data, nontrivial transformation is necessary, which is regarded as a critical link in the chain of knowledge acquisition, sharing, and reuse. Challenges that have been encountered include: how to efficiently and effectively represent human knowledge in formal computing models, how to take advantage of semantic text mining techniques rather than traditional syntactic text mining, and how to handle security issues during knowledge sharing and reuse. This paper summarizes the state of the art in these research directions. We aim to provide readers with an introduction to the major computing themes applicable to medical and biological research.
This paper focuses on semantic knowledge acquisition from blogs with the proposed tag-topic model. The model extends the Latent Dirichlet Allocation (LDA) model by adding a tag layer between the document and the topic. Each document is represented by a mixture of tags; each tag is associated with a multinomial distribution over topics, and each topic is associated with a multinomial distribution over words. After parameter estimation, the tags are used to describe the underlying topics, so the latent semantic knowledge within the topics can be represented explicitly. The tags are treated as concepts, and the top-N words from the top topics are selected as related words of the concepts. Then PMI-IR is employed to compute the relatedness between each tag-word pair, and noisy words with low correlation are removed to improve the quality of the semantic knowledge. Experimental results show that the proposed method can effectively capture semantic knowledge, especially polysemy and synonymy.
Funding: The National Natural Science Foundation of China (Grants No. 90920005, No. 61003192); the Key Project of Philosophy and Social Sciences Research, Ministry of Education (Grant No. 08JZD0032); the Program of Introducing Talents of Discipline to Universities (Grant No. B07042); the Natural Science Foundation of Hubei Province (Grants No. 2011CDA034, No. 2009CDB145); the Chenguang Program of Wuhan Municipality (Grant No. 201050231067); the self-determined research funds of CCNU from the colleges' basic research and operation of MOE (Grants No. CCNU10A02009, No. CCNU10C01005).
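For reference, PMI-IR estimates relatedness from search-engine hit counts; a minimal sketch follows, with invented counts standing in for real query results.

```python
# PMI-IR relatedness from hit counts: log2( p(w1, w2) / (p(w1) * p(w2)) ).
import math

def pmi_ir(hits_w1: int, hits_w2: int, hits_joint: int, total_docs: int) -> float:
    p1, p2 = hits_w1 / total_docs, hits_w2 / total_docs
    p12 = hits_joint / total_docs
    return math.log2(p12 / (p1 * p2)) if p12 > 0 else float("-inf")

# Hypothetical hit counts for one tag-word pair.
score = pmi_ir(hits_w1=12_000, hits_w2=8_000, hits_joint=900, total_docs=1_000_000)
print(f"{score:.2f}")  # positive => co-occurs more than chance; prune pairs below a threshold
```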
Mining the penetration testing semantic knowledge hidden in vast amounts of raw penetration testing data is of vital importance for automated penetration testing. Associative rule mining, a data mining technique, has been studied and explored for a long time. However, few studies have focused on knowledge discovery in the penetration testing area. Experimental results reveal that the long-tail distribution of penetration testing data nullifies the effectiveness of associative rule mining algorithms based on frequent patterns. To address this problem, a Bayesian-inference-based penetration semantic knowledge mining algorithm is proposed. First, a directed bipartite graph model, a kind of Bayesian network, is constructed to formalize penetration testing data. Then, the maximum likelihood estimation method is adopted to optimize the model parameters, and a large Bayesian network is decomposed into smaller networks based on the conditional independence of variables for improved solution efficiency. Finally, irrelevant variable elimination is adopted to extract penetration semantic knowledge from the conditional probability distribution of the model. The experimental results show that the proposed method can discover penetration semantic knowledge from raw penetration testing data effectively and efficiently.
Funding: The National Natural Science Foundation of China (No. 61502528).
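A toy sketch of the maximum-likelihood step: estimating the conditional probability table for one edge of the bipartite graph from hypothetical penetration-testing records (service banner → successful exploit).

```python
# MLE of P(exploit | service) = count(service, exploit) / count(service).
from collections import Counter

# Hypothetical records pairing an observed service with the exploit that succeeded.
observations = [
    ("vsftpd_2.3.4", "backdoor_cmd_exec"),
    ("vsftpd_2.3.4", "backdoor_cmd_exec"),
    ("vsftpd_2.3.4", "brute_force_login"),
    ("openssh_7.2",  "brute_force_login"),
]

pair_counts = Counter(observations)
service_counts = Counter(service for service, _ in observations)
cpd = {pair: n / service_counts[pair[0]] for pair, n in pair_counts.items()}

# The high-probability entries of this table are the "penetration semantic
# knowledge" extracted from the conditional probability distribution.
print(cpd[("vsftpd_2.3.4", "backdoor_cmd_exec")])  # ≈ 0.67
```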
The rapid increase in the publication of knowledge bases as linked open data (LOD) warrants serious consideration from all concerned, as this phenomenon will potentially scale exponentially. This paper briefly describes the evolution of the LOD and the emerging world-wide semantic web (WWSW), and explores the scalability and performance features of the service-oriented architecture that forms the foundation of the semantic technology platform developed at MIMOS Bhd. for addressing the challenges posed by the intelligent future internet. The paper concludes with a review of the current status of the agriculture linked open data.
The drastic growth of coastal observation sensors results in copious data that provide weather information. The intricacies in sensor-generated big data are heterogeneity and interpretation, driving high-end Information Retrieval (IR) systems. The Semantic Web (SW) can solve this issue by integrating data into a single platform for information exchange and knowledge retrieval. This paper focuses on exploiting an SW-based system to provide interoperability through ontologies by combining data concepts with ontology classes. It presents a 4-phase weather data model: data processing, ontology creation, SW processing, and a query engine. The developed Oceanographic Weather Ontology helps to enhance data analysis, discovery, IR, and decision making; it is also evaluated against other state-of-the-art ontologies. The proposed ontology's quality improved by 39.28% in terms of completeness, its structural complexity decreased by 45.29%, and precision and accuracy improved by 11% and 37.7%, respectively. Ocean data from the Indian meteorological satellite INSAT-3D serves as a typical example for testing the proposed model. The experimental results show the effectiveness of the proposed data model and its advantages in machine understanding and IR.
Funding: The Ministry of Earth Science (MoES), Government of India (Grant No. MoES/36/OOIS/Extra/45/2015), URL: https://www.moes.gov.in.
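The query-engine phase could be exercised with a SPARQL query over the ontology, for example via rdflib; the owo: namespace, file name and property names here are hypothetical stand-ins for the actual ontology.

```python
# A sketch of the query-engine phase with rdflib (names are hypothetical).
from rdflib import Graph

g = Graph()
g.parse("oceanographic_weather.owl")  # hypothetical ontology export

# Retrieve warm sea-surface-temperature observations and their stations.
results = g.query("""
    PREFIX owo: <http://example.org/owo#>
    SELECT ?station ?sst
    WHERE {
        ?obs a owo:Observation ;
             owo:observedBy ?station ;
             owo:seaSurfaceTemperature ?sst .
        FILTER (?sst > 28.0)
    }
""")
for station, sst in results:
    print(station, sst)
```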
It was discussed how to reflect the internal relations between judgment and identification, the two most fundamental ways of thinking or cognition operations, in semantic network knowledge representation processing. A new extended Petri net is defined based on qualitative mapping, which strengthens the ability to express the features of thinking and the mode of action of the brain. A model of semantic network knowledge representation based on the new Petri net is given, providing semantic network knowledge with a more efficient representation and reasoning mechanism. This model can not only reflect the characteristics of associative memory in semantic network knowledge representation, but also use the Petri net to express criterion changes and their laws in recognition judgment, especially the cognitive operation of thinking based on the extraction and integration of sensory characteristics, thereby expressing the transition of human cognition from quantitative change to qualitative change.
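For intuition, a Petri net boils down to a marking plus transition firing; the sketch below implements the standard semantics on a toy cognition-flavoured net. The place and transition names are invented, and the paper's qualitative-mapping criteria would be attached to the transitions.

```python
# Standard Petri-net firing over a toy net (names are hypothetical).
marking = {"stimulus": 1, "feature_extracted": 0, "judgment": 0}

transitions = {
    "extract": ({"stimulus": 1}, {"feature_extracted": 1}),  # (consumes, produces)
    "judge":   ({"feature_extracted": 1}, {"judgment": 1}),
}

def fire(name: str) -> bool:
    consumes, produces = transitions[name]
    if any(marking[p] < n for p, n in consumes.items()):
        return False  # transition not enabled
    for p, n in consumes.items():
        marking[p] -= n
    for p, n in produces.items():
        marking[p] += n
    return True

fire("extract"); fire("judge")
print(marking)  # {'stimulus': 0, 'feature_extracted': 0, 'judgment': 1}
```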
This paper proposes a method to construct a conceptual semantic knowledge base of the software engineering domain based on Wikipedia. First, taking the concepts of SWEBOK V3 as the standard, it extracts each concept's interpretation from Wikipedia and extracts keywords as the concept's semantics. Second, the conceptual semantic knowledge base is formed from the hierarchical relationships between concepts and the other concepts appearing in their Wikipedia text interpretations. Finally, the semantic similarity between concepts is calculated with a random walk algorithm for the construction of the conceptual semantic knowledge base. The semantic similarity of the knowledge base constructed by this method reaches more than 84%, verifying the effectiveness of the proposed method.
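One common way to realize random-walk similarity on such a concept graph is personalized PageRank (random walk with restart); a sketch with networkx and an invented toy graph follows. Whether the paper uses exactly this variant is not stated, so treat it as an assumption.

```python
# Random walk with restart via personalized PageRank (toy concept graph).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("software testing", "unit testing"),
    ("software testing", "quality assurance"),
    ("unit testing", "test case"),
    ("quality assurance", "software process"),
])

def rw_similarity(concept_a: str, concept_b: str, alpha: float = 0.85) -> float:
    # Restart the walk at concept_a; read off the stationary mass at concept_b.
    scores = nx.pagerank(G, alpha=alpha, personalization={concept_a: 1.0})
    return scores[concept_b]

print(rw_similarity("software testing", "test case"))
```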
The paper presents an extension multi-layer perceptron model that is capable of representing and reasoning over a propositional knowledge base. An extended version of propositional calculus is developed, and some of its properties are discussed. Formulas of the extended calculus can be expressed in the extension multi-layer perceptron. Naturally, semantic deduction over a propositional knowledge base can be implemented by the extension multi-layer perceptron, and, by learning, an unknown formula set can be found.
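To see how threshold units can evaluate propositional formulas, here is a two-layer sketch computing (a AND b) OR (NOT c); the weights are standard textbook encodings, not the paper's extended calculus.

```python
# Threshold units composing propositional connectives in two layers.
def unit(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def formula(a: int, b: int, c: int) -> int:
    h1 = unit([1, 1], -2, [a, b])      # a AND b (fires only when both are 1)
    h2 = unit([-1], 0, [c])            # NOT c
    return unit([1, 1], -1, [h1, h2])  # h1 OR h2

for a, b, c in [(1, 1, 1), (0, 0, 0), (0, 1, 1)]:
    print((a, b, c), formula(a, b, c))  # 1, 1, 0
```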
In recent years, more and more foreigners have begun to learn Chinese characters, but they often make typos when using Chinese. The fundamental reason is that they mainly learn Chinese characters from glyph and pronunciation, but do not master the semantics of the characters. If they can understand the meaning of Chinese characters and form knowledge groups of characters with related meanings, learning efficiency can be improved effectively. We achieve this goal by building a Chinese character semantic knowledge graph (CCSKG). In the process of building the knowledge graph, the semantic computing capacity of HowNet was utilized, and 104,187 associated edges were finally established for 6,752 Chinese characters. Thanks to the development of deep learning, OpenHowNet releases the core data of HowNet and provides useful APIs for calculating the similarity between two words based on sememes. Our method therefore combines the advantages of data-driven and knowledge-driven approaches. The proposed method treats Chinese sentences as subgraphs of the CCSKG and uses graph algorithms to correct Chinese typos, achieving good results. The experimental results show that, compared with keras-bert and pycorrector + ernie, our method reduces the false acceptance rate by 38.28% and improves the recall rate by 40.91% in the field of learning Chinese as a foreign language. The CCSKG can help to promote Chinese overseas communication and international education.
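The sememe-based similarity API mentioned above could be used roughly as follows, per OpenHowNet's public interface; the edge-building threshold and helper function are hypothetical, not the paper's actual procedure.

```python
# A sketch of building CCSKG edges from OpenHowNet sememe similarity
# (interface per the OpenHowNet README; threshold/helper are hypothetical).
import OpenHowNet

# OpenHowNet.download()  # one-time resource download
hownet = OpenHowNet.HowNetDict(init_sim=True)

def maybe_link(char_a: str, char_b: str, threshold: float = 0.6):
    """Add an edge to the CCSKG when sememe-based similarity clears the threshold."""
    sim = hownet.calculate_word_similarity(char_a, char_b)
    return (char_a, char_b, sim) if sim >= threshold else None

print(maybe_link("爸", "父"))  # characters with related meanings
```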
The paper considers the problem of semantic processing of web documents by designing an approach that combines an extracted semantic document model and a domain-related knowledge base. The knowledge base is populated with learnt classification rules categorizing documents into topics. Classification provides for the reduction of the dimensionality of the document feature space. The semantic model of retrieved web documents is semantically labeled by querying a domain ontology and processed with a content-based classification method. The model obtained is mapped to the existing knowledge base by an inference algorithm, which enables models of the same semantic type to be recognized and integrated into the knowledge base. The approach provides for domain knowledge integration and assists the extraction and modeling of web document semantics. Implementation results of the proposed approach are presented.
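The topic-classification step that reduces feature-space dimensionality can be sketched with scikit-learn; the toy corpus and topics are invented, and the paper's actual rule-learning method may differ from this off-the-shelf classifier.

```python
# A sketch of content-based topic classification (scikit-learn; toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = ["loan interest rates rise", "team wins the championship", "bank approves mortgage"]
topics = ["finance", "sports", "finance"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(docs, topics)

# Each retrieved web document is first routed to a topic, which restricts the
# portion of the knowledge base consulted during semantic labeling.
print(clf.predict(["central bank cuts rates"]))  # ['finance']
```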
The aim of this work is mathematical education through a knowledge system and mathematical modeling. A net model of the formation of mathematical knowledge as a deductive theory is suggested. Within this model, the formation of a deductive theory is represented as the development of a certain informational space whose elements are structured in the form of an orientated semantic net. This net is properly metrized and characterized by a certain system of coverings, which allows introducing net optimization parameters that regulate qualitative aspects of the knowledge system under consideration. To regulate the creative processes of the formation and realization of mathematical knowledge, a stochastic model of the formation of a deductive theory is suggested in the form of a branching Markovian process, realized in the corresponding informational space as a semantic net. From this stochastic model a sound criterion for optimizing creative processes can be derived, leading to the “great main points” strategy (GMP-strategy) for effective control of research work in mathematics and its applications.
A knowledge graph consists of a set of interconnected typed entities and their attributes, which makes it well suited to organizing, managing and understanding knowledge. However, because knowledge graphs contain a large number of knowledge triples, they are difficult to display directly to researchers. The Semantic Link Network is one attempt at a solution: it handles the construction, representation and reasoning of semantics naturally. Based on the Semantic Link Network, this paper explores the representation and construction of knowledge graphs and develops an academic knowledge graph prototype system realizing the representation, construction and visualization of knowledge graphs.
Joint learning of words and entities is advantageous to various NLP tasks, while most existing work focuses on the single-language setting. Cross-lingual representation learning has received much attention recently, but is still restricted by the availability of parallel data. In this paper, a method is proposed to jointly embed texts and entities on comparable data. In addition to evaluating on public semantic textual similarity datasets, a task (cross-lingual text extraction) is proposed to assess the similarities between texts, and a dataset is contributed for it. The results show that the proposed method outperforms cross-lingual representation methods that use parallel data on cross-lingual tasks, and achieves competitive results on mono-lingual tasks.
Given the everlasting significance of knowledge in society and academia, this article proposes a theoretical and methodological perspective on conceptualizing and investigating it. Specifically, it aims to explore the epistemological attitude (EA) theory and its semantic approach to assessing sources of knowledge. The article provides a concise overview of the EA theory, which advocates a systemic perspective on cognition and knowledge. It introduces and elaborates on the core concept and model, which serve as the foundation for the proposed methodology. This methodology suggests examining knowledge objects through subjective, contextual, and epistemological realms as multi-level knowledge constructs. Emphasizing the importance of semantics in studying knowledge, categories, and meanings, the article proposes a semantic questionnaire on epistemological attitudes towards sources of knowledge. The article delves into the methodology, reflecting on its four consecutive stages. It begins with the formal and substantive stages, which involve selecting sources, choosing academic experts as target participants, and developing content. The procedural stage follows, in which an expert review approach is employed to assess the content validity of the method. Finally, the article discusses the semantic method, elucidating its structure, features, semantic categories, and assessment procedure. The proposed method provides a unique contribution by enabling the analysis of the epistemological and socio-psychological meanings of sources, representing them as semantic constructs.
Funding: ESF Project No. 8.2.2.0/20/I/003 “Strengthening of Professional Competence of Daugavpils University Academic Personnel of Strategic Specialization Branches 3rd Call”, Nr. 14-85/14-2022/10.
A knowledge graph (KG) serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework. This framework facilitates a transformation in information retrieval, transitioning it from mere string matching to far more sophisticated entity matching, and invigorating the advancement of artificial intelligence and intelligent information services. Meanwhile, machine learning methods play an important role in KG construction, and these techniques have already achieved initial success. This article embarks on a comprehensive journey through the latest strides in KG construction via machine learning. Drawing on cutting-edge research, it systematically explores KG construction methods in three distinct phases: entity learning, ontology learning, and knowledge reasoning. In particular, machine-learning-driven algorithms are meticulously dissected, spotlighting their contributions to critical facets such as entity extraction, relation extraction, entity linking, and link prediction. Moreover, the article analyzes the unresolved challenges and emerging trajectories in the expansive application of machine-learning-fueled, large-scale KG construction.
Funding: The Beijing Natural Science Foundation (Grants L211020 and M21032); the National Natural Science Foundation of China (Grants U1836106 and 62271045); the Scientific and Technological Innovation Foundation of Foshan (Grants BK21BF001 and BK20BF010).
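As one example from the link-prediction facet, TransE-style models score a triple (h, r, t) by -||h + r - t||; a minimal sketch follows. The embeddings below are random and untrained, so the printed ranking only illustrates the mechanics, not a meaningful prediction.

```python
# TransE-style triple scoring for link prediction (toy, untrained embeddings).
import numpy as np

rng = np.random.default_rng(0)
dim = 8
emb = {name: rng.normal(size=dim)
       for name in ["Paris", "France", "Tokyo", "Japan", "capital_of"]}

def score(h: str, r: str, t: str) -> float:
    # Higher (less negative) score => more plausible triple under TransE.
    return -np.linalg.norm(emb[h] + emb[r] - emb[t])

# Rank candidate tails for (Paris, capital_of, ?); with trained embeddings,
# the true tail would rank first.
candidates = ["France", "Japan", "Tokyo"]
print(sorted(candidates, key=lambda t: score("Paris", "capital_of", t), reverse=True))
```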
With the help of pre-trained language models, the accuracy of the entity linking task has made great strides in recent years. However, most models with excellent performance require fine-tuning on a large amount of training data using large pre-trained language models, which imposes a hardware threshold on accomplishing this task. Some researchers have achieved competitive results with less training data through ingenious methods, such as utilizing information provided by a named entity recognition model. This paper presents a novel semantic-enhancement-based entity linking approach, named semantically enhanced hardware-friendly entity linking (SHEL), which is designed to be hardware friendly and efficient while maintaining good performance. Specifically, SHEL's semantic enhancement consists of three aspects: (1) semantic compression of entity descriptions using a text summarization model; (2) maximizing the capture of mention contexts using asymmetric heuristics; (3) calculating a fixed-size mention representation through pooling operations. This series of semantic enhancement methods effectively improves the model's ability to capture semantic information while taking the hardware constraints into account, and improves the model's convergence speed by more than 50% compared with the strong baseline model proposed in this paper. In terms of performance, SHEL is comparable to the previous method, with superior performance on six well-established datasets, even though SHEL is trained using a smaller pre-trained language model as the encoder.
Funding: The Beijing Municipal Science and Technology Program (Z231100001323004).
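Step (3), the fixed-size mention representation, can be illustrated with mean pooling over the encoder's token embeddings; the shapes and span position below are hypothetical, not taken from the paper.

```python
# Mean pooling collapses a variable-length mention span into one fixed-size
# vector, so the linker's input dimension no longer depends on mention length.
import numpy as np

token_embeddings = np.random.rand(17, 768)  # 17 context tokens from the encoder
mention_span = slice(5, 8)                  # hypothetical mention position

mention_vec = token_embeddings[mention_span].mean(axis=0)  # shape (768,)
context_vec = token_embeddings.mean(axis=0)                # shape (768,)
print(mention_vec.shape, context_vec.shape)
```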
This paper studies the linkage problem between the result of high-level synthesis and back-end technology, presents a knowledge-based method of high-level technology mapping, and studies in depth all of its important links, such as knowledge representation, knowledge utility and knowledge acquisition. It includes: (1) a kind of expanded production for knowledge of circuit structure; (2) a VHDL-based method to acquire technology mapping knowledge; (3) a solution control strategy and algorithm for knowledge utility; (4) a half-automatic maintenance method that can find redundancy and contradiction in the knowledge base; (5) a practical method to embed the algorithm into the knowledge system to decrease the complexity of the knowledge base. A system has been developed and linked with three kinds of technologies, verifying the work of this paper.
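An "expanded production" for technology mapping is essentially a condition-action pair over circuit structure; the sketch below shows one way such rules could be encoded and applied. The cell names and the first-match control strategy are illustrative assumptions, not the paper's exact scheme.

```python
# Production rules mapping abstract operations to library cells (hypothetical).
rules = [
    {   # map a wide adder onto a carry-lookahead cell
        "if":   lambda op: op["kind"] == "add" and op["width"] >= 16,
        "then": {"cell": "CLA_ADDER", "library": "target_lib"},
    },
    {   # default adder mapping
        "if":   lambda op: op["kind"] == "add",
        "then": {"cell": "RIPPLE_ADDER", "library": "target_lib"},
    },
]

def map_operation(op: dict) -> dict:
    for rule in rules:  # control strategy: first matching rule wins
        if rule["if"](op):
            return rule["then"]
    raise LookupError("no mapping rule matched")

print(map_operation({"kind": "add", "width": 32}))  # {'cell': 'CLA_ADDER', ...}
```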