Interconnection of all things challenges traditional communication methods, and Semantic Communication and Computing (SCC) will become a new solution. Accurately detecting, extracting, and representing semantic information is a challenging task in research on SCC-based networks. In previous research, researchers usually use convolution to extract the feature information of a graph and perform the corresponding node classification task. However, the content of semantic information is quite complex. Although graph convolutional neural networks provide an effective solution for node classification tasks, their limitations in representing multiple relational patterns and their inability to recognize and analyze higher-order local structures mean that the extracted feature information suffers varying degrees of loss. Therefore, this paper extends from a single-layer topology network to a multi-layer heterogeneous topology network. Word vectors trained with Bidirectional Encoder Representations from Transformers (BERT) are introduced to extract the semantic features in the network, and the existing graph neural network is improved by combining it with a higher-order local feature module based on network motifs. A multi-layer network embedding algorithm with motifs on SCC-based networks is proposed to complete the end-to-end node classification task. We verify the effectiveness of the algorithm on a real multi-layer heterogeneous network.
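As an illustration of the general recipe described above, the sketch below (not the paper's exact model) encodes node texts with a pretrained BERT and propagates the resulting semantic features through a single graph-convolution layer for node classification; the model name, dimensions, and toy graph are assumptions.

```python
# Illustrative sketch: BERT node features + one GCN layer for node classification.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def bert_node_features(texts, model_name="bert-base-uncased"):
    """Return one [CLS] embedding per node text (untuned, frozen BERT)."""
    tok = AutoTokenizer.from_pretrained(model_name)
    bert = AutoModel.from_pretrained(model_name)
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state[:, 0]          # shape: (num_nodes, 768)

class GCNClassifier(torch.nn.Module):
    """One graph convolution over the semantic features, then a linear head."""
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hidden)
        self.lin2 = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, adj):
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
        a = adj + torch.eye(adj.size(0))
        d = a.sum(1).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(self.lin1(a_norm @ x))
        return self.lin2(a_norm @ h)

# Usage on a toy 3-node graph (texts and adjacency are made up for illustration)
texts = ["node about sports", "node about politics", "node about football"]
adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
x = bert_node_features(texts)
logits = GCNClassifier(x.size(1), 64, num_classes=2)(x, adj)
```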
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations, but it is incapable of modeling high-order correlations among different objects in systems; thus, the graph structure cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: ① hypergraph structure modeling, ② hypergraph semantic computing, and ③ efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that, compared with a traditional data-based method, hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
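The hyperedge-based modeling described above can be made concrete with a minimal sketch of one hypergraph convolution step in plain PyTorch, following the widely used HGNN-style propagation rule X' = Dv^-1/2 H W De^-1 Hᵀ Dv^-1/2 X Θ. The toy hyperedges and dimensions below are assumptions; the DHG library mentioned in the abstract offers ready-made building blocks for the same operation.

```python
# Minimal hypergraph convolution sketch: hyperedges connect sets of nodes at once.
import torch

def hypergraph_conv(x, hyperedges, num_nodes, out_dim):
    """x: (num_nodes, in_dim) features; hyperedges: list of node-index lists."""
    # Incidence matrix H: H[v, e] = 1 if node v belongs to hyperedge e
    H = torch.zeros(num_nodes, len(hyperedges))
    for e, nodes in enumerate(hyperedges):
        H[nodes, e] = 1.0
    W = torch.eye(len(hyperedges))                 # hyperedge weights (uniform here)
    Dv = torch.diag(H.sum(1).pow(-0.5))            # node degree normalization
    De = torch.diag(H.sum(0).pow(-1.0))            # hyperedge degree normalization
    theta = torch.nn.Linear(x.size(1), out_dim, bias=False)   # untrained, for shape only
    return Dv @ H @ W @ De @ H.T @ Dv @ theta(x)

# Usage: 5 nodes, 2 hyperedges, each relating more than two objects simultaneously
x = torch.randn(5, 16)
out = hypergraph_conv(x, hyperedges=[[0, 1, 2], [2, 3, 4]], num_nodes=5, out_dim=8)
```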
Data aggregation from various web sources is very significant for the web data analysis domain. In addition, the recognition of coherent micro-clusters is one of the most interesting issues in the field of data aggregation. Until now, many algorithms have been proposed to work on this issue. However, the deficiency of these solutions is that they cannot recognize micro-clusters in the data stream accurately. A semantic-based coherent micro-cluster recognition algorithm for hybrid web data streams is proposed. Firstly, an objective function is proposed to recognize the coherent micro-clusters, and then the coherent micro-cluster recognition algorithm for hybrid web data streams based on semantics is presented. Fi-
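Since the abstract is truncated, the following is only a generic illustration of how a semantic, threshold-based objective might group a hybrid web data stream into coherent micro-clusters; it is not the paper's algorithm, and the embedding source and the 0.8 threshold are assumptions.

```python
# Greedy single-pass micro-clustering over semantic vectors of stream items.
import numpy as np

def assign_stream(item_vectors, coherence_threshold=0.8):
    """Items join the most semantically similar micro-cluster centroid,
    or open a new micro-cluster when coherence falls below the threshold."""
    centroids, members = [], []
    for v in item_vectors:
        v = v / np.linalg.norm(v)                     # unit-normalize for cosine similarity
        if centroids:
            sims = np.array([c @ v for c in centroids])
            best = int(sims.argmax())
            if sims[best] >= coherence_threshold:     # coherent enough: join and update centroid
                members[best].append(v)
                centroids[best] = np.mean(members[best], axis=0)
                centroids[best] /= np.linalg.norm(centroids[best])
                continue
        centroids.append(v)                           # otherwise start a new micro-cluster
        members.append([v])
    return centroids, members
```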
Based on SCR (Software Cost Reduction), this paper presents a formal model for analyzing SCR-style requirements. The model mainly applies state translation rules, semantic computing rules, and attributes to define the formal semantics of the tabular notation in the SCR requirements method, and it can automatically analyze requirements specifications specified with the SCR method. Combined with a simple example, this paper introduces how to analyze the consistency and completeness of requirements specifications.
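The kind of analysis the paper automates can be sketched on a toy SCR-style condition table: consistency means no two rows can be true in the same state, and completeness means every state is covered by some row. The water-level example and the enumeration bounds below are assumptions, not the paper's notation.

```python
# Check a toy condition table for consistency (rows disjoint) and completeness (rows cover all states).
rows = [
    lambda s: s["level"] < 10,                 # mode: low
    lambda s: 10 <= s["level"] <= 90,          # mode: normal
    lambda s: s["level"] > 90,                 # mode: high
]

def check_table(rows, states):
    consistent = all(sum(r(s) for r in rows) <= 1 for s in states)   # no overlapping rows
    complete   = all(any(r(s) for r in rows) for s in states)        # every state matched
    return consistent, complete

states = [{"level": lvl} for lvl in range(0, 101)]
print(check_table(rows, states))   # (True, True): the table is disjoint and covering
```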
Cloud monitoring is a source of big data that is constantly produced from traces of infrastructures, platforms, and applications. Analysis of monitoring data delivers insights into the system's workload and usage patterns and ensures workloads are operating at optimum levels. The analysis process involves data query and extraction, data analysis, and result visualization. Since the volume of monitoring data is large, these operations require a scalable and reliable architecture to extract, aggregate, and analyze data at an arbitrary range of granularity. Ultimately, the results of analysis become the knowledge of the system and should be shared and communicated. This paper presents our cloud service architecture, which explores a search cluster for data indexing and querying. We develop REST APIs through which the data can be accessed by different analysis modules. This architecture enables extensions to integrate with software frameworks for both batch processing (such as Hadoop) and stream processing (such as Spark) of big data. The analysis results are structured in Semantic MediaWiki pages in the context of the monitoring data source and the analysis process. This cloud architecture is empirically assessed to evaluate its responsiveness when processing a large set of data records under node failures.
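A hypothetical sketch of one such REST endpoint is shown below: clients request aggregated metrics over an arbitrary time range and granularity, and the query is delegated to the search cluster. The endpoint path, parameter names, and stub backend are assumptions, not the paper's actual API.

```python
# Hypothetical monitoring-query endpoint in the spirit of the described architecture.
from flask import Flask, jsonify, request

app = Flask(__name__)

def query_search_cluster(metric, start, end, granularity):
    """Stand-in for the search-cluster query (e.g. an index lookup plus a
    date-histogram aggregation); returns one aggregated bucket per interval."""
    return [{"timestamp": start, "metric": metric, "avg": 0.42,
             "granularity": granularity}]

@app.route("/api/metrics/<metric>")
def get_metric(metric):
    start = request.args.get("start")              # ISO-8601 timestamps
    end = request.args.get("end")
    granularity = request.args.get("granularity", "1m")
    return jsonify(query_search_cluster(metric, start, end, granularity))

# Usage: GET /api/metrics/cpu_load?start=2016-01-01T00:00Z&end=2016-01-02T00:00Z&granularity=5m
```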
Funding (SCC-based network embedding paper): supported by the National Natural Science Foundation of China (62101088, 61801076, 61971336), the Natural Science Foundation of Liaoning Province (2022-MS-157, 2023-MS-108), the Key Laboratory of Big Data Intelligent Computing Funds for Chongqing University of Posts and Telecommunications (BDIC-2023-A-003), and the Fundamental Research Funds for the Central Universities (3132022230).
Funding (coherent micro-cluster recognition paper): supported by the National High Technology Research and Development Programme of China (No. 2011AA120300, 2011AA120302) and the National Key Technology Support Program of China (No. 2013BAH66F02).
Funding (cloud monitoring architecture paper): supported by Discovery grant No. RGPIN 2014-05254 from the Natural Sciences and Engineering Research Council (NSERC), Canada.