Recommendation Information Systems (RIS) are pivotal in helping users swiftly locate desired content from the vast amount of information available on the Internet. Graph Convolution Network (GCN) algorithms have been employed to implement RIS efficiently. However, the GCN algorithm faces limitations in terms of performance enhancement owing to the embedding value-vanishing problem that occurs during the learning process. To address this issue, we propose a Weighted Forwarding method using the GCN (WF-GCN) algorithm. The proposed method involves multiplying the embedding results with different weights for each hop layer during graph learning. By applying the WF-GCN algorithm, which adjusts weights for each hop layer before forwarding to the next, nodes with many neighbors achieve higher embedding values. This approach facilitates the learning of more hop layers within the GCN framework. The efficacy of WF-GCN was demonstrated through its application to various datasets. On the MovieLens dataset, the implementation of WF-GCN in LightGCN resulted in significant performance improvements, with recall and NDCG increasing by up to +163.64% and +132.04%, respectively. Similarly, on the Last.FM dataset, LightGCN enhanced with WF-GCN showed substantial improvements, with the recall and NDCG metrics rising by up to +174.40% and +169.95%, respectively. Furthermore, the application of WF-GCN to Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) also demonstrated notable enhancements in both recall and NDCG across these datasets.
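To make the weighted-forwarding idea concrete, the sketch below applies a per-hop scalar weight to a LightGCN-style propagation loop before forwarding embeddings to the next hop. It is a minimal illustration only: the dense adjacency, the layer count, and the weight values (1.0, 1.5, 2.0) are assumptions for the toy example, not the paper's reported configuration.

```python
import torch

def wf_gcn_propagate(adj_norm, emb0, layer_weights):
    """LightGCN-style propagation with per-hop forwarding weights.

    adj_norm:      (N, N) symmetrically normalized user-item adjacency (dense here for brevity)
    emb0:          (N, d) initial user/item embeddings
    layer_weights: one scalar per hop layer, applied before forwarding to the next hop
    """
    layer_embs = [emb0]
    h = emb0
    for w in layer_weights:
        h = w * torch.matmul(adj_norm, h)   # weighted forwarding to the next hop
        layer_embs.append(h)
    # final embedding: mean over all hop layers (LightGCN-style readout)
    return torch.stack(layer_embs, dim=0).mean(dim=0)

# toy example with assumed sizes and weights
N, d = 8, 16
adj = torch.rand(N, N); adj = (adj + adj.t()) / 2
deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
emb = wf_gcn_propagate(adj_norm, torch.randn(N, d), layer_weights=[1.0, 1.5, 2.0])
```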
Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most of the existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. Additionally, we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework. In addition, we present a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through the above methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, when compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
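The abstract does not spell out the exact hyperbolic operations, but a common way to move Euclidean node features into hyperbolic space is the exponential map at the origin of the Poincaré ball, with linear transformations applied in the tangent space. The sketch below illustrates that generic pattern; the curvature value and the `hyperbolic_linear` helper are assumptions for illustration, not HDGCNN's actual definition.

```python
import torch

def exp_map_origin(x, c=1.0, eps=1e-8):
    """Poincare-ball exponential map at the origin: Euclidean tangent vector -> hyperbolic point."""
    sqrt_c = c ** 0.5
    norm = x.norm(dim=-1, keepdim=True).clamp(min=eps)
    return torch.tanh(sqrt_c * norm) * x / (sqrt_c * norm)

def log_map_origin(y, c=1.0, eps=1e-8):
    """Inverse map: hyperbolic point -> tangent vector at the origin."""
    sqrt_c = c ** 0.5
    norm = y.norm(dim=-1, keepdim=True).clamp(min=eps, max=1.0 / sqrt_c - eps)
    return torch.atanh(sqrt_c * norm) * y / (sqrt_c * norm)

def hyperbolic_linear(y, weight, c=1.0):
    """A common 'tangent-space' trick: log-map, apply a Euclidean linear map, exp-map back."""
    return exp_map_origin(log_map_origin(y, c) @ weight.t(), c)

x = torch.randn(4, 8)                     # Euclidean node features
h = exp_map_origin(x)                     # features lifted to the Poincare ball
h = hyperbolic_linear(h, torch.randn(8, 8))
```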
The prediction of Multivariate Time Series (MTS) explores the interrelationships among variables at historical moments, extracts their relevant characteristics, and is widely used in finance, weather, complex industries and other fields. Furthermore, it is important for constructing a digital twin system. However, existing methods do not take full advantage of the potential properties of variables, which results in poor prediction accuracy. In this paper, we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network (AFSTGCN). First, to address the problem of the unknown spatial-temporal structure, we construct the Adaptive Fused Spatial-Temporal Graph (AFSTG) layer. Specifically, we fuse the spatial-temporal graph based on the interrelationship of spatial graphs. Simultaneously, we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods. Subsequently, to overcome the insufficient extraction of disordered correlation features, we construct the Adaptive Fused Spatial-Temporal Graph Convolutional (AFSTGC) module. The module forces the reordering of disordered temporal, spatial and spatial-temporal dependencies into rule-like data. AFSTGCN dynamically and synchronously acquires potential temporal, spatial and spatial-temporal correlations, thereby fully extracting rich hierarchical feature information to enhance prediction accuracy. Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
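One widely used way to build an adaptive adjacency matrix from node embeddings, as the AFSTG layer does according to the abstract, is to learn two embedding tables and normalize their pairwise affinities. The sketch below shows that generic construction; the module name, node count, and dimensions are illustrative assumptions rather than AFSTGCN's exact design.

```python
import torch
import torch.nn.functional as F

class AdaptiveAdjacency(torch.nn.Module):
    """Learn an adjacency matrix directly from node embeddings (no predefined graph needed)."""
    def __init__(self, num_nodes, emb_dim):
        super().__init__()
        self.src = torch.nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.dst = torch.nn.Parameter(torch.randn(num_nodes, emb_dim))

    def forward(self):
        scores = torch.relu(self.src @ self.dst.t())   # pairwise affinity of node embeddings
        return F.softmax(scores, dim=1)                # row-normalized adaptive adjacency

adj = AdaptiveAdjacency(num_nodes=10, emb_dim=16)()
x = torch.randn(10, 32)            # one time step of node features
out = adj @ x                      # one adaptive graph convolution (aggregation) step
```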
The relationship between users and items, which cannot be recovered by traditional techniques, can be extracted by recommendation algorithms based on the graph convolution network. The simple linear combination used in current algorithms may not be sufficient to extract the complex structure of user interaction data. This paper presents a new approach to address such issues, utilizing the graph convolution network to extract association relations. The proposed approach mainly includes three modules: the embedding layer, the forward propagation layer, and the score prediction layer. The embedding layer models users and items according to their interaction information and generates initial feature vectors as input for the forward propagation layer. The forward propagation layer designs two parallel graph convolution networks with self-connections, which extract higher-order association relevance from users and items separately by multi-layer graph convolution. Furthermore, the forward propagation layer integrates the attention factor to assign different weights among the hop neighbors of the graph convolution network fusion, capturing more comprehensive association relevance between users and items as input for the score prediction layer. The score prediction layer introduces an MLP (multi-layer perceptron) to conduct non-linear feature interaction between users and items. Finally, the prediction scores of users for items are obtained. The recall rate and normalized discounted cumulative gain were used as evaluation indexes. The proposed approach effectively integrates higher-order information in user entries, and experimental analysis demonstrates its superiority over existing algorithms.
Traffic flow prediction plays a key role in the construction of intelligent transportation systems. However, due to its complex spatio-temporal dependence and inherent uncertainty, this research is very challenging. Most of the existing studies are based on graph neural networks that model traffic flow graphs and try to use a fixed graph structure to deal with the relationships between nodes. However, due to the time-varying spatial correlation of the traffic network, there is no fixed node relationship, and these methods cannot effectively integrate temporal and spatial features. This paper proposes a novel temporal-spatial dynamic graph convolutional network (TSADGCN). The dynamic time warping (DTW) algorithm is introduced to calculate the similarity of traffic flow sequences among network nodes in the time dimension, and the spatiotemporal graph of traffic flow is constructed to capture the spatiotemporal characteristics and dependencies of traffic flow. By combining a graph attention network and a time attention network, a spatiotemporal convolution block is constructed to capture the spatiotemporal characteristics of traffic data. Experiments on the open datasets PEMSD4 and PEMSD8 show that TSADGCN has higher prediction accuracy than well-known traffic flow prediction algorithms.
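A minimal version of the DTW-based graph construction described above computes pairwise DTW distances between node flow sequences and converts them into edge weights. The Gaussian-kernel conversion and the `sigma` and `threshold` values below are assumptions for illustration; the paper's exact similarity-to-edge rule is not given in the abstract.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_adjacency(series, sigma=10.0, threshold=0.5):
    """Gaussian-kernel similarity from pairwise DTW distances, thresholded into a graph."""
    n = len(series)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            sim = np.exp(-dtw_distance(series[i], series[j]) ** 2 / (2 * sigma ** 2))
            A[i, j] = A[j, i] = sim if sim >= threshold else 0.0
    return A

flows = [np.random.rand(24) for _ in range(5)]   # 5 sensors, 24 time steps each
A = dtw_adjacency(flows)
```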
The collective Unmanned Weapon System-of-Systems (UWSOS) network represents a fundamental element in modern warfare, characterized by a diverse array of unmanned combat platforms interconnected through heterogeneous network architectures. Despite its strategic importance, the UWSOS network is highly susceptible to hostile infiltrations, which significantly impede its battlefield recovery capabilities. Existing methods to enhance network resilience predominantly focus on basic graph relationships, neglecting the crucial higher-order dependencies among nodes necessary for capturing multi-hop meta-paths within the UWSOS. To address these limitations, we propose the Enhanced-Resilience Multi-Layer Attention Graph Convolutional Network (E-MAGCN), designed to augment the adaptability of UWSOS. Our approach employs BERT for extracting semantic insights from nodes and edges, thereby refining feature representations by leveraging various node and edge categories. Additionally, E-MAGCN integrates a regularization-based multi-layer attention mechanism and a semantic node fusion algorithm within the Graph Convolutional Network (GCN) framework. Through extensive simulation experiments, our model demonstrates an enhancement in resilience performance ranging from 1.2% to 7% over existing algorithms.
Multi-label image classification is recognized as an important task within the field of computer vision, a discipline that has experienced a significant escalation in research endeavors in recent years. The widespread adoption of convolutional neural networks (CNNs) has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification. However, in multi-label image classification tasks, it is crucial to consider the correlation between labels. In order to improve the accuracy and performance of multi-label classification and fully combine visual and semantic features, many existing studies use graph convolutional networks (GCN) for modeling. Object detection and multi-label image classification exhibit a degree of conceptual overlap; however, the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature. In this paper, we propose the Object-GCN framework, a model combining the object detection network YOLOv5 with a graph convolutional network, and we carry out a thorough experimental analysis using a range of well-established public datasets. The designed Object-GCN framework achieves significantly better performance than existing studies on the public datasets COCO2014, VOC2007, and VOC2012. The final results achieved are 86.9%, 96.7%, and 96.3% mean Average Precision (mAP) across the three datasets.
The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era, the quick development of telecommunications services, the implementation of the number portability policy, and the intensifying competition among operators. At the same time, users' consumption preferences and choices are evolving. Excellent churn prediction models must be created in order to accurately predict the churn tendency, since keeping existing customers is far less expensive than acquiring new ones. However, conventional or learning-based algorithms can only go so far with a single subscriber's data; they cannot take into consideration changes in a subscriber's subscription, and they ignore the coupling and correlation between various features. Additionally, the current churn prediction models have a high computational burden, a fuzzy weight distribution, and significant resource and economic costs. The prediction algorithms involving network models currently in use primarily take into account the private information shared between users via text and pictures, ignoring the reference value supplied by other users with the same package. This work proposes a user churn prediction model based on a Graph Attention Convolutional Neural Network (GAT-CNN) to address the aforementioned issues. The main contributions of this paper are as follows. Firstly, we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user feature input by means of two aggregations at the device, edge, and cloud layers. Secondly, we extend the use of users' own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously. Lastly, we build an integrated offline-online system for churn prediction based on the strengths of the two models, and we experimentally validate the efficacy of cloud-side collaborative training and inference. In summary, the churn prediction model based on the Graph Attention Convolutional Neural Network presented in this paper can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber retention strategies and cutting operational expenses.
Deep neural network-based relation extraction research has made significant progress in recent years, and it provides data support for many natural language processing downstream tasks such as building knowledge graphs, sentiment analysis and question-answering systems. However, previous studies ignored much unused structural information in sentences that could enhance the performance of the relation extraction task. Moreover, most existing dependency-based models utilize self-attention to distinguish the importance of context, which hardly deals with multiple-structure information. To efficiently leverage multiple kinds of structure information, this paper proposes a dynamic structure attention mechanism model based on textual structure information, which deeply integrates word embeddings, named entity recognition labels, part of speech, the dependency tree and dependency types into a graph convolutional network. Specifically, our model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features in various structural information. In addition, multi-structure weights are carefully designed as a merging mechanism in the different structure attention to dynamically adjust the final attention. This paper combines these features and trains a graph convolutional network for relation extraction. We experiment on supervised relation extraction datasets including SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED, and the results significantly outperform previous work.
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. Firstly, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the next subsequent step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and Nell-995, the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
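The defining trait of an independently recurrent network is that each hidden unit keeps its own scalar recurrent weight instead of a full recurrent matrix, which helps with long sequences. The sketch below rolls such a cell over per-period (already graph-convolved) entity features; the dimensions and activation are illustrative assumptions, not IndRT-GCNets' exact configuration.

```python
import torch

class IndRNNCell(torch.nn.Module):
    """Independently recurrent cell: h_t = act(W x_t + u * h_{t-1}), with element-wise u."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w = torch.nn.Linear(in_dim, hid_dim)
        self.u = torch.nn.Parameter(torch.rand(hid_dim))   # one recurrent weight per unit

    def forward(self, x_t, h_prev):
        return torch.relu(self.w(x_t) + self.u * h_prev)

# toy rollout over T periods of (already graph-convolved) entity features
T, N, d, h = 6, 20, 32, 64
cell = IndRNNCell(d, h)
hidden = torch.zeros(N, h)
for t in range(T):
    hidden = cell(torch.randn(N, d), hidden)   # per-period evolutionary representation
```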
In recent years, semantic segmentation on 3D point cloud data has attracted much attention. Unlike 2D images, where pixels distribute regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. Therefore, it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods either focus on local feature aggregation or on long-range context dependency, but fail to directly establish a global-local feature extractor to complete the point cloud semantic segmentation task. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further augment the weights of crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model can improve the performance of point cloud semantic segmentation.
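The dense-sparse idea can be sketched as follows: keep the k nearest vertices around a center point (dense, local) and add a few randomly chosen distant vertices (sparse, long-range) before running the GCN. The neighbor counts and random selection rule below are assumptions for illustration, not SGT-Net's exact sampling procedure.

```python
import numpy as np

def dense_sparse_neighbors(points, center_idx, k_dense=16, k_sparse=4, rng=None):
    """Pick k_dense nearest vertices plus k_sparse randomly chosen distant vertices."""
    rng = rng or np.random.default_rng(0)
    d = np.linalg.norm(points - points[center_idx], axis=1)
    order = np.argsort(d)
    dense = order[1:k_dense + 1]                       # closest vertices (skip the center itself)
    far_pool = order[k_dense + 1:]
    sparse = rng.choice(far_pool, size=min(k_sparse, len(far_pool)), replace=False)
    return np.concatenate([dense, sparse])             # vertex set fed to the subsequent GCN

pts = np.random.rand(1024, 3)                          # a toy point cloud
neigh = dense_sparse_neighbors(pts, center_idx=0)
```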
Cybersecurity has become the most significant research area in the domain of the Internet of Things (IoT) owing to the ever-increasing number of cyberattacks. The rapid penetration of Android platforms in mobile devices has made the detection of malware attacks a challenging process. Furthermore, Android malware is increasing on a daily basis. Thus, precise malware detection analytical techniques need a large number of hardware resources, which are significantly limited for mobile devices. In this research article, an optimal Graph Convolutional Neural Network-based Malware Detection and Classification (OGCNN-MDC) model is introduced for an IoT-cloud environment. The proposed OGCNN-MDC model aims to recognize and categorize malware occurrences in IoT-enabled cloud platforms. The presented OGCNN-MDC model has three stages in total: data pre-processing, malware detection and parameter tuning. To detect and classify the malware, the GCNN model is exploited in this work. In order to enhance the overall efficiency of the GCNN model, the Group Mean-based Optimizer (GMBO) algorithm is utilized to appropriately adjust the GCNN parameters, and this phenomenon shows the novelty of the current study. A widespread experimental analysis was conducted to establish the superiority of the proposed OGCNN-MDC model. A comprehensive comparison study was conducted, and the outcomes highlighted the supreme performance of the proposed OGCNN-MDC model over other recent approaches.
A significant advantage of medical image processing is that it allows non-invasive exploration of internal anatomy in great detail. It is possible to create and study 3D models of anatomical structures to improve treatment outcomes, develop more effective medical devices, or arrive at a more accurate diagnosis. This paper aims to present a fused evolutionary algorithm that takes advantage of both whale optimization and bacterial foraging optimization to optimize feature extraction. The classification process was conducted with the aid of a convolutional neural network (CNN) with dual graphs. Evaluation of the performance of the fused model is carried out with various methods. From the initial input Computer Tomography (CT) images, 150 images are pre-processed and segmented to identify cancerous and non-cancerous nodules. The geometrical, statistical, structural, and texture features are extracted from the preprocessed segmented image using various methods such as the Gray-level co-occurrence matrix (GLCM), Histogram-oriented gradient features (HOG), and the Gray-level dependence matrix (GLDM). To select the optimal features, a novel fusion approach known as Whale-Bacterial Foraging Optimization is proposed. For the classification of lung cancer, dual graph convolutional neural networks have been employed. A comparison of classification algorithms and optimization algorithms has been conducted. According to the evaluated results, the proposed fused algorithm is successful, with an accuracy of 98.72% in predicting lung tumors, and it outperforms other conventional approaches.
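Wrapper-style feature selection of the kind described above needs a fitness function that scores a candidate binary feature mask; the fused whale/bacterial-foraging optimizer would then search over such masks. The sketch below shows only that fitness step under assumed choices (a k-NN classifier, 3-fold cross-validation, and an `alpha` trade-off), not the paper's actual optimizer or classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def mask_fitness(X, y, mask, alpha=0.99):
    """Wrapper fitness of a binary feature mask: weighted accuracy minus a feature-count penalty."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return alpha * acc + (1 - alpha) * (1 - mask.sum() / X.shape[1])

# toy data standing in for GLCM/HOG/GLDM feature vectors and nodule labels
X = np.random.rand(60, 40)
y = np.random.randint(0, 2, size=60)
candidate = (np.random.rand(40) > 0.5).astype(int)     # one candidate solution of the optimizer
print(mask_fitness(X, y, candidate))
```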
The continuous improvement of the cyber threat intelligence sharing mechanism provides new ideas to deal with Advanced Persistent Threats (APT). Extracting attack behaviors, i.e., Tactics, Techniques, Procedures (TTP), from Cyber Threat Intelligence (CTI) can facilitate the profiling of APT actors for an immediate response. However, it is difficult for traditional manual methods to analyze attack behaviors from cyber threat intelligence due to its heterogeneous nature. Based on the Adversarial Tactics, Techniques and Common Knowledge (ATT&CK) description of threat behavior, this paper proposes a threat behavioral knowledge extraction framework that integrates a Heterogeneous Text Network (HTN) and a Graph Convolutional Network (GCN) to solve this issue. It leverages the hierarchical correlation relationships of attack techniques and tactics in ATT&CK to construct a text network of heterogeneous cyber threat intelligence. With the help of the Bidirectional Encoder Representation from Transformers (BERT) pretraining model to analyze the contextual semantics of cyber threat intelligence, the task of threat behavior identification is transformed into a text classification task, which automatically extracts attack behavior from CTI and then identifies the malware and advanced threat actors. The experimental results show that F1 scores reach 94.86% and 92.15% for the multi-label classification tasks of tactics and techniques, respectively. The experiments are extended to verify the method's effectiveness in identifying the malware and threat actors in APT attacks. The F1 scores for the malware and advanced threat actor identification tasks reach 98.45% and 99.48%, which are better than the benchmark models in the experiments and achieve state-of-the-art results. The model can effectively model threat intelligence text data and acquire knowledge and experience migration by correlating implied features with a priori knowledge to compensate for insufficient sample data and improve the classification performance and recognition ability of threat behavior in text.
The ever-growing available visual data (i.e., uploaded videos and pictures by internet users) has attracted the research community's attention in the computer vision field. Therefore, finding efficient solutions to extract knowledge from these sources is imperative. Recently, the BlazePose system has been released for skeleton extraction from images oriented to mobile devices. With this skeleton graph representation in place, a Spatial-Temporal Graph Convolutional Network can be implemented to predict the action. We hypothesize that just by changing the skeleton input data for a different set of joints that offers more information about the action of interest, it is possible to increase the performance of the Spatial-Temporal Graph Convolutional Network for HAR tasks. Hence, in this study, we present the first implementation of the BlazePose skeleton topology upon this architecture for action recognition. Moreover, we propose the Enhanced-BlazePose topology that can achieve better results than its predecessor. Additionally, we propose different skeleton detection thresholds that can improve the accuracy performance even further. We reached a top-1 accuracy performance of 40.1% on the Kinetics dataset. For the NTU-RGB+D dataset, we achieved 87.59% and 92.1% accuracy for the Cross-Subject and Cross-View evaluation criteria, respectively.
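A skeleton detection threshold of the kind proposed above can be sketched as masking out joints whose visibility score falls below a cutoff before the clip is fed to the Spatial-Temporal Graph Convolutional Network. BlazePose outputs 33 landmarks with a per-joint visibility value; the cutoff values and the zero-masking rule below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def threshold_keypoints(frames, conf_threshold=0.5):
    """Zero out low-confidence joints so they do not contribute to graph message passing.

    frames: (T, 33, 4) array of BlazePose landmarks per frame: (x, y, z, visibility).
    Returns (T, 33, 3) coordinates with unreliable joints masked to zero.
    """
    coords = frames[..., :3].copy()
    mask = frames[..., 3] < conf_threshold          # visibility below the detection threshold
    coords[mask] = 0.0
    return coords

clip = np.random.rand(30, 33, 4)                    # toy 30-frame clip
skeleton_input = threshold_keypoints(clip, conf_threshold=0.6)
```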
GitHub repository recommendation is a research hotspot in the field of open-source software. The current problems with the repository recommendation system are the insufficient utilization of open-source community information and the fact that the scoring metrics used to calculate the matching degree between developers and repositories are developed manually and rely too much on human experience, leading to poor recommendation results. To address these problems, we design a questionnaire to investigate which repository information developers focus on and propose a graph convolutional network-based repository recommendation system (GCNRec). First, to solve insufficient information utilization in open-source communities, we construct a Developer-Repository network using four types of behavioral data that best reflect developers' programming preferences and extract features of developers and repositories from the repository content that developers focus on. Then, we design a repository recommendation model based on a multi-layer graph convolutional network to avoid the manual formulation of scoring metrics. This model takes the Developer-Repository network, developer features and repository features as inputs, and recommends the top-k repositories that developers are most likely to be interested in by learning their preferences. We have verified the proposed GCNRec on the dataset, and by comparing it with other open-source repository recommendation methods, GCNRec achieves higher precision and hit rate.
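As a rough sketch of the data side of such a system, a Developer-Repository network can be assembled from behavioral records into a bipartite adjacency and symmetrically normalized for graph convolution. The interaction list, sizes, and single propagation step below are toy assumptions, not GCNRec's full model or its actual four behavior types.

```python
import numpy as np

def build_norm_bipartite(num_devs, num_repos, interactions):
    """Build a symmetrically normalized Developer-Repository adjacency from behavior records.

    interactions: iterable of (developer_id, repository_id) pairs, e.g. star/fork/watch/commit events.
    """
    R = np.zeros((num_devs, num_repos))
    for dev, repo in interactions:
        R[dev, repo] = 1.0
    # full bipartite adjacency: developers in the first block, repositories in the second
    A = np.block([[np.zeros((num_devs, num_devs)), R],
                  [R.T, np.zeros((num_repos, num_repos))]])
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    return d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

A_norm = build_norm_bipartite(3, 4, [(0, 1), (0, 2), (1, 0), (2, 3)])
emb = np.random.rand(7, 8)                 # 3 developer + 4 repository embeddings
emb_next = A_norm @ emb                    # one propagation layer of the recommendation GCN
```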
Aquatic medicine knowledge graph is an effective means to realize intelligent aquaculture. Graph completion technology is key to improving the quality of knowledge graph construction. However, the difficulty of semantic discrimination among similar entities and inconspicuous semantic features result in low accuracy when completing aquatic medicine knowledge graphs with complex relationships. In this study, an aquatic medicine knowledge graph completion method (TransH+HConvAM) is proposed. Firstly, TransH is applied to split the vector plane between entities and relations, ameliorating the poor completion effect caused by the low semantic resolution of entities. Then, hybrid convolution is introduced to obtain the global interaction of triples based on the complete interaction between head/tail entities and relations, which improves the semantic features of triples and enhances the completion effect of complex relationships in the graph. Experiments are conducted to verify the performance of the proposed method. The MR, MRR and Hit@10 of TransH+HConvAM are found to be 674, 0.339, and 0.361, respectively. This study shows that the model effectively overcomes the poor completion effect of complex relationships and improves the construction quality of the aquatic medicine knowledge graph, providing technical support for intelligent aquaculture.
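TransH's core operation, projecting head and tail entities onto a relation-specific hyperplane before applying the relation's translation, can be written in a few lines. The sketch below shows that standard scoring function with random toy vectors; the hybrid-convolution (HConvAM) part of the proposed method is not reproduced here.

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH plausibility score: translate on the relation-specific hyperplane.

    h, t : head/tail entity embeddings
    w_r  : normal vector of the relation hyperplane (normalized inside)
    d_r  : translation vector of the relation
    Lower scores indicate more plausible triples.
    """
    w_r = w_r / np.linalg.norm(w_r)
    h_perp = h - np.dot(w_r, h) * w_r      # projection onto the hyperplane
    t_perp = t - np.dot(w_r, t) * w_r
    return np.linalg.norm(h_perp + d_r - t_perp)

dim = 16
rng = np.random.default_rng(0)
print(transh_score(rng.normal(size=dim), rng.normal(size=dim),
                   rng.normal(size=dim), rng.normal(size=dim)))
```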
With the development of social media and the prevalence of mobile devices, an increasing number of people tend to use social media platforms to express their opinions and attitudes, leading to many online controversies. These online controversies can severely threaten social stability, making automatic detection of controversies particularly necessary. Most controversy detection methods currently focus on mining features from text semantics and propagation structures. However, these methods have two drawbacks: 1) limited ability to capture structural features and failure to learn deeper structural features, and 2) neglecting the influence of topic information and ineffective utilization of topic features. In light of these phenomena, this paper proposes a social media controversy detection method called Dual Feature Enhanced Graph Convolutional Network (DFE-GCN). This method explores structural information at different scales from global and local perspectives to capture deeper structural features, enhancing the expressive power of structural features. Furthermore, to strengthen the influence of topic information, this paper utilizes attention mechanisms to enhance topic features after each graph convolutional layer, effectively using topic information. We validated our method on two different public datasets, and the experimental results demonstrate that our method achieves state-of-the-art performance compared to baseline methods. On the Weibo and Reddit datasets, the accuracy is improved by 5.92% and 3.32%, respectively, and the F1 score is improved by 1.99% and 2.17%, demonstrating the positive impact of enhanced structural features and topic features on controversy detection.
Event detection (ED) is aimed at detecting event occurrences and categorizing them. This task has previously been solved via recognition and classification of event triggers (ETs), which are defined as the phrase or word most clearly expressing event occurrence. Thus, current approaches require both annotated triggers and event types in training data. Nevertheless, triggers are non-essential in ED, and it is time-consuming for annotators to identify the "most clearly" expressing word in a sentence, particularly in longer sentences. To decrease manual effort, we evaluate event detection without triggers. We propose a novel framework that combines Type-aware Attention and Graph Convolutional Networks (TA-GCN) for event detection. Specifically, the task is formulated as a multi-label classification problem. We first encode the input sentence using a novel type-aware neural network with attention mechanisms. Then, a Graph Convolutional Network (GCN)-based multi-label classification model is exploited for event detection. Experimental results demonstrate the effectiveness of the proposed approach.
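A trigger-free detector of this kind reduces to multi-label classification over event types: per-type attention pools token representations into one vector per type, and a sigmoid output with binary cross-entropy supervises each type independently. The sketch below illustrates that pattern; the module name, dimensions, and attention form are assumptions, not TA-GCN's exact architecture, and the GCN component is omitted.

```python
import torch
import torch.nn.functional as F

class TypeAwareMultiLabelHead(torch.nn.Module):
    """Minimal trigger-free event detector: per-type attention over token states + sigmoid outputs."""
    def __init__(self, hidden, num_types):
        super().__init__()
        self.type_queries = torch.nn.Parameter(torch.randn(num_types, hidden))
        self.cls = torch.nn.Linear(hidden, 1)

    def forward(self, token_states):                  # (batch, seq_len, hidden)
        att = torch.softmax(self.type_queries @ token_states.transpose(1, 2), dim=-1)  # (B, types, L)
        type_repr = att @ token_states                # (batch, num_types, hidden)
        return self.cls(type_repr).squeeze(-1)        # one logit per event type

head = TypeAwareMultiLabelHead(hidden=128, num_types=10)
logits = head(torch.randn(4, 32, 128))                # toy batch of encoded sentences
labels = torch.randint(0, 2, (4, 10)).float()         # multi-hot event-type labels
loss = F.binary_cross_entropy_with_logits(logits, labels)
```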
Traditional malware research is mainly based on malware recognition and detection as a breakthrough point, without focusing on its propagation trends or predicting the subsequently infected nodes. The complexity of network structure, the diversity of network nodes, and the sparsity of data all pose difficulties in predicting propagation. This paper proposes a malware propagation prediction model based on representation learning and Graph Convolutional Networks (GCN) to address the aforementioned problems. First, to solve the problem of inaccurate infection intensity calculation caused by the sparsity of node interaction behavior data in the malware propagation network, a tensor-based mechanism to mine the infection intensity among nodes is proposed to retain the network structure information. The influence of the relationship between nodes on the infection intensity is also analyzed. Second, given the diversity and complexity of the content and structure of infected and normal nodes in the network, and considering the advantages of representation learning in data feature extraction, the corresponding representation learning method is adopted for the characteristics of infection intensity among nodes. This can efficiently calculate the relationship between entities and relationships in low-dimensional space to achieve the goal of low-dimensional, dense, and real-valued representation learning for the characteristics of propagation spatial data. We also design a new method, Tensor2vec, to learn the potential structural features of malware propagation. Finally, considering the convolution ability of GCN for non-Euclidean data, we propose a dynamic prediction model of malware propagation based on representation learning and GCN to solve the time-effectiveness problem of the malware propagation carrier. The experimental results show that the proposed model can effectively predict the behaviors of the nodes in the network and discover the influence of different characteristics of nodes on the malware propagation situation.
基金This work was supported by the Kyonggi University Research Grant 2022.
文摘Recommendation Information Systems(RIS)are pivotal in helping users in swiftly locating desired content from the vast amount of information available on the Internet.Graph Convolution Network(GCN)algorithms have been employed to implement the RIS efficiently.However,the GCN algorithm faces limitations in terms of performance enhancement owing to the due to the embedding value-vanishing problem that occurs during the learning process.To address this issue,we propose a Weighted Forwarding method using the GCN(WF-GCN)algorithm.The proposed method involves multiplying the embedding results with different weights for each hop layer during graph learning.By applying the WF-GCN algorithm,which adjusts weights for each hop layer before forwarding to the next,nodes with many neighbors achieve higher embedding values.This approach facilitates the learning of more hop layers within the GCN framework.The efficacy of the WF-GCN was demonstrated through its application to various datasets.In the MovieLens dataset,the implementation of WF-GCN in LightGCN resulted in significant performance improvements,with recall and NDCG increasing by up to+163.64%and+132.04%,respectively.Similarly,in the Last.FM dataset,LightGCN using WF-GCN enhanced with WF-GCN showed substantial improvements,with the recall and NDCG metrics rising by up to+174.40%and+169.95%,respectively.Furthermore,the application of WF-GCN to Self-supervised Graph Learning(SGL)and Simple Graph Contrastive Learning(SimGCL)also demonstrated notable enhancements in both recall and NDCG across these datasets.
基金supported by the National Natural Science Foundation of China-China State Railway Group Co.,Ltd.Railway Basic Research Joint Fund (Grant No.U2268217)the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No.2021YJ183).
文摘Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.
基金supported by the China Scholarship Council and the CERNET Innovation Project under grant No.20170111.
文摘The prediction for Multivariate Time Series(MTS)explores the interrelationships among variables at historical moments,extracts their relevant characteristics,and is widely used in finance,weather,complex industries and other fields.Furthermore,it is important to construct a digital twin system.However,existing methods do not take full advantage of the potential properties of variables,which results in poor predicted accuracy.In this paper,we propose the Adaptive Fused Spatial-Temporal Graph Convolutional Network(AFSTGCN).First,to address the problem of the unknown spatial-temporal structure,we construct the Adaptive Fused Spatial-Temporal Graph(AFSTG)layer.Specifically,we fuse the spatial-temporal graph based on the interrelationship of spatial graphs.Simultaneously,we construct the adaptive adjacency matrix of the spatial-temporal graph using node embedding methods.Subsequently,to overcome the insufficient extraction of disordered correlation features,we construct the Adaptive Fused Spatial-Temporal Graph Convolutional(AFSTGC)module.The module forces the reordering of disordered temporal,spatial and spatial-temporal dependencies into rule-like data.AFSTGCN dynamically and synchronously acquires potential temporal,spatial and spatial-temporal correlations,thereby fully extracting rich hierarchical feature information to enhance the predicted accuracy.Experiments on different types of MTS datasets demonstrate that the model achieves state-of-the-art single-step and multi-step performance compared with eight other deep learning models.
基金supported by the Fundamental Research Funds for Higher Education Institutions of Heilongjiang Province(145209126)the Heilongjiang Province Higher Education Teaching Reform Project under Grant No.SJGY20200770.
文摘The relationship between users and items,which cannot be recovered by traditional techniques,can be extracted by the recommendation algorithm based on the graph convolution network.The current simple linear combination of these algorithms may not be sufficient to extract the complex structure of user interaction data.This paper presents a new approach to address such issues,utilizing the graph convolution network to extract association relations.The proposed approach mainly includes three modules:Embedding layer,forward propagation layer,and score prediction layer.The embedding layer models users and items according to their interaction information and generates initial feature vectors as input for the forward propagation layer.The forward propagation layer designs two parallel graph convolution networks with self-connections,which extract higher-order association relevance from users and items separately by multi-layer graph convolution.Furthermore,the forward propagation layer integrates the attention factor to assign different weights among the hop neighbors of the graph convolution network fusion,capturing more comprehensive association relevance between users and items as input for the score prediction layer.The score prediction layer introduces MLP(multi-layer perceptron)to conduct non-linear feature interaction between users and items,respectively.Finally,the prediction score of users to items is obtained.The recall rate and normalized discounted cumulative gain were used as evaluation indexes.The proposed approach effectively integrates higher-order information in user entries,and experimental analysis demonstrates its superiority over the existing algorithms.
基金supported by the National Natural Science Foundation of China(Grant:62176086).
文摘Traffic flow prediction plays a key role in the construction of intelligent transportation system.However,due to its complex spatio-temporal dependence and its uncertainty,the research becomes very challenging.Most of the existing studies are based on graph neural networks that model traffic flow graphs and try to use fixed graph structure to deal with the relationship between nodes.However,due to the time-varying spatial correlation of the traffic network,there is no fixed node relationship,and these methods cannot effectively integrate the temporal and spatial features.This paper proposes a novel temporal-spatial dynamic graph convolutional network(TSADGCN).The dynamic time warping algorithm(DTW)is introduced to calculate the similarity of traffic flow sequence among network nodes in the time dimension,and the spatiotemporal graph of traffic flow is constructed to capture the spatiotemporal characteristics and dependencies of traffic flow.By combining graph attention network and time attention network,a spatiotemporal convolution block is constructed to capture spatiotemporal characteristics of traffic data.Experiments on open data sets PEMSD4 and PEMSD8 show that TSADGCN has higher prediction accuracy than well-known traffic flow prediction algorithms.
基金This research was supported by the Key Research and Development Program of Shaanxi Province(2024GX-YBXM-010)the National Science Foundation of China(61972302).
文摘The collective Unmanned Weapon System-of-Systems(UWSOS)network represents a fundamental element in modern warfare,characterized by a diverse array of unmanned combat platforms interconnected through hetero-geneous network architectures.Despite its strategic importance,the UWSOS network is highly susceptible to hostile infiltrations,which significantly impede its battlefield recovery capabilities.Existing methods to enhance network resilience predominantly focus on basic graph relationships,neglecting the crucial higher-order dependencies among nodes necessary for capturing multi-hop meta-paths within the UWSOS.To address these limitations,we propose the Enhanced-Resilience Multi-Layer Attention Graph Convolutional Network(E-MAGCN),designed to augment the adaptability of UWSOS.Our approach employs BERT for extracting semantic insights from nodes and edges,thereby refining feature representations by leveraging various node and edge categories.Additionally,E-MAGCN integrates a regularization-based multi-layer attention mechanism and a semantic node fusion algo-rithm within the Graph Convolutional Network(GCN)framework.Through extensive simulation experiments,our model demonstrates an enhancement in resilience performance ranging from 1.2% to 7% over existing algorithms.
文摘Multi-label image classification is recognized as an important task within the field of computer vision,a discipline that has experienced a significant escalation in research endeavors in recent years.The widespread adoption of convolutional neural networks(CNNs)has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification.However,inmulti-label image classification tasks,it is crucial to consider the correlation between labels.In order to improve the accuracy and performance of multi-label classification and fully combine visual and semantic features,many existing studies use graph convolutional networks(GCN)for modeling.Object detection and multi-label image classification exhibit a degree of conceptual overlap;however,the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature.In this paper,we come up with Object-GCN framework,a model combining object detection network YOLOv5 and graph convolutional network,and we carry out a thorough experimental analysis using a range of well-established public datasets.The designed framework Object-GCN achieves significantly better performance than existing studies in public datasets COCO2014,VOC2007,VOC2012.The final results achieved are 86.9%,96.7%,and 96.3%mean Average Precision(mAP)across the three datasets.
基金supported by National Key R&D Program of China(No.2022YFB3104500)Natural Science Foundation of Jiangsu Province(No.BK20222013)Scientific Research Foundation of Nanjing Institute of Technology(No.3534113223036)。
文摘The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era,the quick development of telecommunications services,the implementation of the number portability policy,and the intensifying competition among operators.At the same time,users'consumption preferences and choices are evolving.Excellent churn prediction models must be created in order to accurately predict the churn tendency,since keeping existing customers is far less expensive than acquiring new ones.But conventional or learning-based algorithms can only go so far into a single subscriber's data;they cannot take into consideration changes in a subscriber's subscription and ignore the coupling and correlation between various features.Additionally,the current churn prediction models have a high computational burden,a fuzzy weight distribution,and significant resource economic costs.The prediction algorithms involving network models currently in use primarily take into account the private information shared between users with text and pictures,ignoring the reference value supplied by other users with the same package.This work suggests a user churn prediction model based on Graph Attention Convolutional Neural Network(GAT-CNN)to address the aforementioned issues.The main contributions of this paper are as follows:Firstly,we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user feature input by means of two aggregations at the device,edge,and cloud layers.Second,we extend the use of users'own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously.Lastly,we build an integrated offline-online system for churn prediction based on the strengths of the two models,and we experimentally validate the efficacy of cloudside collaborative training and inference.In summary,the churn prediction model based on Graph Attention Convolutional Neural Network presented in this paper can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber retention strategies and cutting operational expenses.
文摘Deep neural network-based relational extraction research has made significant progress in recent years,andit provides data support for many natural language processing downstream tasks such as building knowledgegraph,sentiment analysis and question-answering systems.However,previous studies ignored much unusedstructural information in sentences that could enhance the performance of the relation extraction task.Moreover,most existing dependency-based models utilize self-attention to distinguish the importance of context,whichhardly deals withmultiple-structure information.To efficiently leverage multiple structure information,this paperproposes a dynamic structure attention mechanism model based on textual structure information,which deeplyintegrates word embedding,named entity recognition labels,part of speech,dependency tree and dependency typeinto a graph convolutional network.Specifically,our model extracts text features of different structures from theinput sentence.Textual Structure information Graph Convolutional Networks employs the dynamic structureattention mechanism to learn multi-structure attention,effectively distinguishing important contextual features invarious structural information.In addition,multi-structure weights are carefully designed as amergingmechanismin the different structure attention to dynamically adjust the final attention.This paper combines these featuresand trains a graph convolutional network for relation extraction.We experiment on supervised relation extractiondatasets including SemEval 2010 Task 8,TACRED,TACREV,and Re-TACED,the result significantly outperformsthe previous.
基金the National Natural Science Founda-tion of China(62062062)hosted by Gulila Altenbek.
文摘Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events,we propose an Independent Recurrent Temporal Graph Convolution Networks(IndRT-GCNets)framework to efficiently and accurately capture event attribute information.The framework models the knowledge graph sequences to learn the evolutionary represen-tations of entities and relations within each period.Firstly,by utilizing the temporal graph convolution module in the evolutionary representation unit,the framework captures the structural dependency relationships within the knowledge graph in each period.Meanwhile,to achieve better event representation and establish effective correlations,an independent recurrent neural network is employed to implement auto-regressive modeling.Furthermore,static attributes of entities in the entity-relation events are constrained andmerged using a static graph constraint to obtain optimal entity representations.Finally,the evolution of entity and relation representations is utilized to predict events in the next subsequent step.On multiple real-world datasets such as Freebase13(FB13),Freebase 15k(FB15K),WordNet11(WN11),WordNet18(WN18),FB15K-237,WN18RR,YAGO3-10,and Nell-995,the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks,which validates the effectiveness and robustness.
基金supported in part by the National Natural Science Foundation of China under Grant Nos.U20A20197,62306187the Foundation of Ministry of Industry and Information Technology TC220H05X-04.
文摘In recent years,semantic segmentation on 3D point cloud data has attracted much attention.Unlike 2D images where pixels distribute regularly in the image domain,3D point clouds in non-Euclidean space are irregular and inherently sparse.Therefore,it is very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space.Most current methods either focus on local feature aggregation or long-range context dependency,but fail to directly establish a global-local feature extractor to complete the point cloud semantic segmentation tasks.In this paper,we propose a Transformer-based stratified graph convolutional network(SGT-Net),which enlarges the effective receptive field and builds direct long-range dependency.Specifically,we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for subsequent graph convolutional network(GCN).Secondly,we propose a multi-key self-attention mechanism based on the Transformer to further weight augmentation for crucial neighboring relationships and enlarge the effective receptive field.In addition,to further improve the efficiency of the network,we propose a similarity measurement module to determine whether the neighborhood near the center point is effective.We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets.Through ablation experiments and segmentation visualization,we verify that the SGT model can improve the performance of the point cloud semantic segmentation.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R237)Princess Nourah bint Abdulrahman University,Riyadh,Saudi ArabiaThe authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4331004DSR13).
文摘Cybersecurity has become the most significant research area in the domain of the Internet of Things(IoT)owing to the ever-increasing number of cyberattacks.The rapid penetration of Android platforms in mobile devices has made the detection of malware attacks a challenging process.Furthermore,Android malware is increasing on a daily basis.So,precise malware detection analytical techniques need a large number of hardware resources that are signifi-cantly resource-limited for mobile devices.In this research article,an optimal Graph Convolutional Neural Network-based Malware Detection and classification(OGCNN-MDC)model is introduced for an IoT-cloud environment.The pro-posed OGCNN-MDC model aims to recognize and categorize malware occur-rences in IoT-enabled cloud platforms.The presented OGCNN-MDC model has three stages in total,such as data pre-processing,malware detection and para-meter tuning.To detect and classify the malware,the GCNN model is exploited in this work.In order to enhance the overall efficiency of the GCNN model,the Group Mean-based Optimizer(GMBO)algorithm is utilized to appropriately adjust the GCNN parameters,and this phenomenon shows the novelty of the cur-rent study.A widespread experimental analysis was conducted to establish the superiority of the proposed OGCNN-MDC model.A comprehensive comparison study was conducted,and the outcomes highlighted the supreme performance of the proposed OGCNN-MDC model over other recent approaches.
Abstract: A significant advantage of medical image processing is that it allows non-invasive exploration of internal anatomy in great detail. It is possible to create and study 3D models of anatomical structures to improve treatment outcomes, develop more effective medical devices, or arrive at a more accurate diagnosis. This paper aims to present a fused evolutionary algorithm that takes advantage of both whale optimization and bacterial foraging optimization to optimize feature extraction. The classification process is conducted with the aid of a convolutional neural network (CNN) with dual graphs. The performance of the fused model is evaluated with various methods. From the initial input Computed Tomography (CT) images, 150 images are pre-processed and segmented to identify cancerous and non-cancerous nodules. The geometrical, statistical, structural, and texture features are extracted from the preprocessed segmented images using methods such as the Gray-Level Co-occurrence Matrix (GLCM), Histogram of Oriented Gradients (HOG), and Gray-Level Dependence Matrix (GLDM). To select the optimal features, a novel fusion approach known as Whale-Bacterial Foraging Optimization is proposed. For the classification of lung cancer, dual graph convolutional neural networks are employed. A comparison of classification algorithms and optimization algorithms has been conducted. According to the evaluated results, the proposed fused algorithm predicts lung tumors with an accuracy of 98.72% and outperforms other conventional approaches.
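As a rough illustration of wrapper-style feature selection, the NumPy sketch below searches over binary feature masks with random bit flips; it is only a stand-in, since the paper's Whale-Bacterial Foraging Optimization uses its own update equations, and the toy scoring function here is purely hypothetical.

```python
import numpy as np

def select_features(X, y, evaluate, iters=200, seed=0):
    # Toy wrapper selector: keep the binary feature mask with the best score.
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, X.shape[1]).astype(bool)
    if not mask.any():
        mask[0] = True
    best_mask, best_score = mask.copy(), evaluate(X[:, mask], y)
    for _ in range(iters):
        cand = best_mask.copy()
        cand[rng.integers(X.shape[1])] ^= True        # flip one feature in or out
        if cand.any():
            score = evaluate(X[:, cand], y)
            if score > best_score:
                best_mask, best_score = cand.copy(), score
    return best_mask

X = np.random.rand(100, 12)                           # toy feature matrix (e.g., GLCM/HOG/GLDM features)
y = (X[:, 0] + X[:, 3] > 1).astype(int)               # toy labels
toy_score = lambda Xs, y: -np.mean((Xs.mean(axis=1) > 0.5) != y)   # swap in a real classifier's accuracy
mask = select_features(X, y, toy_score)
```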
Funding: Supported by China's National Key R&D Program (No. 2019QY1404), the National Natural Science Foundation of China (Grant Nos. U20A20161 and U1836103), and the Basic Strengthening Program Project (No. 2019-JCJQ-ZD-113).
Abstract: The continuous improvement of the cyber threat intelligence sharing mechanism provides new ideas for dealing with Advanced Persistent Threats (APT). Extracting attack behaviors, i.e., Tactics, Techniques, and Procedures (TTP), from Cyber Threat Intelligence (CTI) can facilitate the profiling of APT actors for an immediate response. However, it is difficult for traditional manual methods to analyze attack behaviors from cyber threat intelligence due to its heterogeneous nature. Based on the Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) description of threat behavior, this paper proposes a threat behavioral knowledge extraction framework that integrates a Heterogeneous Text Network (HTN) and a Graph Convolutional Network (GCN) to solve this issue. It leverages the hierarchical correlation relationships of attack techniques and tactics in ATT&CK to construct a text network of heterogeneous cyber threat intelligence. With the help of the Bidirectional Encoder Representations from Transformers (BERT) pretraining model to analyze the contextual semantics of cyber threat intelligence, the task of threat behavior identification is transformed into a text classification task, which automatically extracts attack behaviors from CTI and then identifies the malware and advanced threat actors. The experimental results show that the F1 scores reach 94.86% and 92.15% for the multi-label classification tasks of tactics and techniques, respectively. The experiments are extended to verify the method's effectiveness in identifying malware and threat actors in APT attacks: the F1 scores for the malware and advanced threat actor identification tasks reach 98.45% and 99.48%, respectively, outperforming the benchmark models in the experiments and achieving state-of-the-art results. The model effectively represents threat intelligence text data and transfers knowledge and experience by correlating implied features with prior knowledge, compensating for insufficient sample data and improving the classification performance and recognition ability for threat behaviors in text.
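To show how threat-behavior identification reduces to multi-label text classification, here is a minimal PyTorch head over pre-computed sentence embeddings (for instance, BERT [CLS] vectors refined by a GCN); the embedding pipeline, the heterogeneous text network, and the label count are all assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class MultiLabelHead(nn.Module):
    # Minimal multi-label classifier: one independent logit per ATT&CK tactic/technique.
    def __init__(self, dim=768, num_labels=14):
        super().__init__()
        self.fc = nn.Linear(dim, num_labels)

    def forward(self, x):
        return self.fc(x)                               # raw logits

embeddings = torch.randn(8, 768)                        # toy batch of CTI sentence embeddings
labels = torch.randint(0, 2, (8, 14)).float()           # multi-hot labels (toy data)
head = MultiLabelHead()
loss = nn.BCEWithLogitsLoss()(head(embeddings), labels) # standard multi-label loss
predicted = torch.sigmoid(head(embeddings)) > 0.5       # threshold per label
```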
Abstract: The ever-growing volume of available visual data (i.e., videos and pictures uploaded by internet users) has attracted the research community's attention in the computer vision field. Therefore, finding efficient solutions to extract knowledge from these sources is imperative. Recently, the BlazePose system has been released for skeleton extraction from images, oriented to mobile devices. With this skeleton graph representation in place, a Spatial-Temporal Graph Convolutional Network can be implemented to predict the action. We hypothesize that simply by changing the skeleton input data to a different set of joints that offers more information about the action of interest, it is possible to increase the performance of the Spatial-Temporal Graph Convolutional Network for Human Action Recognition (HAR) tasks. Hence, in this study, we present the first implementation of the BlazePose skeleton topology upon this architecture for action recognition. Moreover, we propose the Enhanced-BlazePose topology, which can achieve better results than its predecessor. Additionally, we propose different skeleton detection thresholds that can improve the accuracy performance even further. We reached a top-1 accuracy of 40.1% on the Kinetics dataset. For the NTU-RGB+D dataset, we achieved 87.59% and 92.1% accuracy for the Cross-Subject and Cross-View evaluation criteria, respectively.
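For readers who want to reproduce the skeleton-extraction step, a minimal sketch with the MediaPipe BlazePose solution is shown below; the file name is a placeholder, and the visibility threshold is an illustrative stand-in for the detection thresholds discussed above, not the exact values used in the study.

```python
import cv2
import mediapipe as mp

# Extract the 33 BlazePose landmarks from one frame and keep reliable joints only.
with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    frame = cv2.imread("frame.jpg")                          # placeholder path
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        joints = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark
                  if lm.visibility > 0.5]                    # illustrative visibility threshold
```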
Funding: Supported by the Special Funds for the Construction of an Innovative Province of Hunan (No. 2020GK2028).
Abstract: GitHub repository recommendation is a research hotspot in the field of open-source software. The current problems with repository recommendation systems are the insufficient utilization of open-source community information and the fact that the scoring metrics used to calculate the matching degree between developers and repositories are developed manually and rely too much on human experience, leading to poor recommendation results. To address these problems, we design a questionnaire to investigate which repository information developers focus on and propose a graph convolutional network-based repository recommendation system (GCNRec). First, to solve the insufficient utilization of information in open-source communities, we construct a Developer-Repository network using four types of behavioral data that best reflect developers' programming preferences, and we extract features of developers and repositories from the repository content that developers focus on. Then, we design a repository recommendation model based on a multi-layer graph convolutional network to avoid the manual formulation of scoring metrics. This model takes the Developer-Repository network, developer features, and repository features as inputs, and recommends the top-k repositories that developers are most likely to be interested in by learning their preferences. We have verified the proposed GCNRec on the dataset, and by comparing it with other open-source repository recommendation methods, GCNRec achieves higher precision and hit rate.
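A schematic NumPy version of the multi-layer propagation and top-k scoring is given below; it uses a simple row-normalized Developer-Repository matrix and dot-product scores, which is only an approximation of the GCNRec model described above.

```python
import numpy as np

def propagate(adj_norm, dev_emb, repo_emb, layers=3):
    # Alternate aggregation across the developer-repository bipartite edges,
    # then average the per-layer embeddings (LightGCN-style readout).
    d, r = dev_emb, repo_emb
    d_sum, r_sum = dev_emb.copy(), repo_emb.copy()
    for _ in range(layers):
        d, r = adj_norm @ r, adj_norm.T @ d
        d_sum += d
        r_sum += r
    return d_sum / (layers + 1), r_sum / (layers + 1)

def top_k(dev_vec, repo_emb, k=5):
    return np.argsort(-(repo_emb @ dev_vec))[:k]         # indices of the k highest-scoring repositories

rng = np.random.default_rng(0)
adj = rng.integers(0, 2, (30, 80)).astype(float)         # toy developer-repository interactions
adj_norm = adj / np.maximum(adj.sum(1, keepdims=True), 1)
dev, repo = propagate(adj_norm, rng.normal(size=(30, 16)), rng.normal(size=(80, 16)))
recommended = top_k(dev[0], repo)
```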
Funding: Supported by the Key Laboratory of Environment Controlled Aquaculture (Dalian Ocean University), Ministry of Education (No. 2021-MOEKLECA-KF-05) and the National Natural Science Foundation of China Youth Science Fund (No. 61802046).
Abstract: An aquatic medicine knowledge graph is an effective means of realizing intelligent aquaculture. Graph completion technology is key to improving the quality of knowledge graph construction. However, the difficulty of semantic discrimination among similar entities and inconspicuous semantic features result in low accuracy when completing an aquatic medicine knowledge graph with complex relationships. In this study, an aquatic medicine knowledge graph completion method (TransH+HConvAM) is proposed. Firstly, TransH is applied to split the vector plane between entities and relations, ameliorating the poor completion effect caused by the low semantic resolution of entities. Then, hybrid convolution is introduced to obtain the global interaction of triples based on the complete interaction between head/tail entities and relations, which improves the semantic features of triples and enhances the completion of complex relationships in the graph. Experiments are conducted to verify the performance of the proposed method. The MR, MRR, and Hit@10 of TransH+HConvAM are found to be 674, 0.339, and 0.361, respectively. This study shows that the model effectively overcomes the poor completion of complex relationships and improves the construction quality of the aquatic medicine knowledge graph, providing technical support for intelligent aquaculture.
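The TransH part of the method is a standard translation model on relation-specific hyperplanes; a minimal PyTorch scoring function is sketched below (the hybrid-convolution module HConvAM is omitted, and all tensors here are random toy embeddings).

```python
import torch
import torch.nn.functional as F

def transh_score(h, t, w_r, d_r):
    # TransH: project head/tail onto the relation hyperplane (unit normal w_r),
    # then score how well the translation d_r links the two projections.
    w = F.normalize(w_r, dim=-1)
    h_p = h - (h * w).sum(-1, keepdim=True) * w
    t_p = t - (t * w).sum(-1, keepdim=True) * w
    return -torch.norm(h_p + d_r - t_p, p=2, dim=-1)     # higher score = more plausible triple

h, t = torch.randn(4, 64), torch.randn(4, 64)            # toy head/tail entity embeddings
w_r, d_r = torch.randn(4, 64), torch.randn(4, 64)        # relation hyperplane normals and translations
scores = transh_score(h, t, w_r, d_r)
```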
Funding: Funded by the Natural Science Foundation of China (Grant No. 202204120017), the Autonomous Region Science and Technology Program (Grant No. 2022B01008-2), and the Autonomous Region Science and Technology Program (Grant No. 2020A02001-1).
Abstract: With the development of social media and the prevalence of mobile devices, an increasing number of people tend to use social media platforms to express their opinions and attitudes, leading to many online controversies. These online controversies can severely threaten social stability, making automatic detection of controversies particularly necessary. Most current controversy detection methods focus on mining features from text semantics and propagation structures. However, these methods have two drawbacks: 1) a limited ability to capture structural features and a failure to learn deeper structural features, and 2) neglect of the influence of topic information and ineffective utilization of topic features. In light of these phenomena, this paper proposes a social media controversy detection method called Dual Feature Enhanced Graph Convolutional Network (DFE-GCN). This method explores structural information at different scales from global and local perspectives to capture deeper structural features, enhancing the expressive power of structural features. Furthermore, to strengthen the influence of topic information, this paper utilizes attention mechanisms to enhance topic features after each graph convolutional layer, effectively using topic information. We validated our method on two different public datasets, and the experimental results demonstrate that our method achieves state-of-the-art performance compared with baseline methods. On the Weibo and Reddit datasets, the accuracy is improved by 5.92% and 3.32%, respectively, and the F1 score is improved by 1.99% and 2.17%, demonstrating the positive impact of the enhanced structural and topic features on controversy detection.
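The sketch below shows one way to gate a graph-convolution output with a topic vector via attention, in the spirit of the topic-feature enhancement described above; the layer structure, gating form, and dimensions are assumptions, not the exact DFE-GCN design.

```python
import torch
import torch.nn as nn

class TopicEnhancedGCNLayer(nn.Module):
    # One graph convolution followed by an attention gate over a topic embedding.
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.att = nn.Linear(2 * dim, 1)

    def forward(self, x, adj_norm, topic):
        h = torch.relu(adj_norm @ self.lin(x))            # plain GCN propagation
        topic_b = topic.expand_as(h)
        alpha = torch.sigmoid(self.att(torch.cat([h, topic_b], dim=-1)))
        return alpha * h + (1 - alpha) * topic_b          # topic-aware mixing

x = torch.randn(10, 32)                                   # toy post features
adj_norm = torch.eye(10)                                  # toy normalized adjacency
topic = torch.randn(1, 32)                                # toy topic embedding
out = TopicEnhancedGCNLayer(32)(x, adj_norm, topic)
```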
Funding: Supported by the Hunan Provincial Natural Science Foundation of China (Grant No. 2020JJ4624), the National Social Science Fund of China (Grant No. 20&ZD047), the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 19A020), the National University of Defense Technology Research Project (ZK20-46), and the Young Elite Scientists Sponsorship Program (2021-JCJQ-QT-050).
Abstract: Event detection (ED) is aimed at detecting event occurrences and categorizing them. This task has previously been solved via recognition and classification of event triggers (ETs), which are defined as the phrase or word most clearly expressing event occurrence. Thus, current approaches require both annotated triggers and event types in the training data. Nevertheless, triggers are non-essential in ED, and it is time-consuming for annotators to identify the word that most clearly expresses an event in a sentence, particularly in longer sentences. To decrease manual effort, we evaluate event detection without triggers. We propose a novel framework that combines Type-aware Attention and Graph Convolutional Networks (TA-GCN) for event detection. Specifically, the task is formulated as a multi-label classification problem. We first encode the input sentence using a novel type-aware neural network with attention mechanisms. Then, a Graph Convolutional Network (GCN)-based multi-label classification model is exploited for event detection. Experimental results demonstrate the effectiveness of the proposed framework.
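To illustrate how trigger-free event detection can be cast as multi-label classification, the PyTorch sketch below lets each event type attend over token representations and produce its own logit; the module name, dimensions, and number of event types are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TypeAwareAttention(nn.Module):
    # Each event type queries the token representations and receives one logit.
    def __init__(self, dim, num_types):
        super().__init__()
        self.type_emb = nn.Parameter(torch.randn(num_types, dim))
        self.score = nn.Linear(dim, 1)

    def forward(self, tokens):                            # tokens: (seq_len, dim)
        att = torch.softmax(self.type_emb @ tokens.T, dim=-1)   # (num_types, seq_len)
        per_type = att @ tokens                                  # type-specific sentence vectors
        return self.score(per_type).squeeze(-1)                  # one logit per event type

tokens = torch.randn(20, 128)                             # toy encoded sentence (20 tokens)
logits = TypeAwareAttention(dim=128, num_types=33)(tokens)
probs = torch.sigmoid(logits)                             # multi-label event probabilities
```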
Funding: This research is partially supported by the National Natural Science Foundation of China (Grant No. 61772098), the Chongqing Technology Innovation and Application Development Project (Grant No. cstc2020jscxmsxmX0150), the Chongqing Science and Technology Innovation Leading Talent Support Program (CSTCCXLJRC201908), the Basic and Advanced Research Projects of CSTC (No. cstc2019jcyj-zdxmX0008), and the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJZD-K201900605).
Abstract: Traditional malware research mainly takes recognition and detection as its breakthrough point, without focusing on propagation trends or predicting the subsequently infected nodes. The complexity of the network structure, the diversity of network nodes, and the sparsity of data all pose difficulties in predicting propagation. This paper proposes a malware propagation prediction model based on representation learning and Graph Convolutional Networks (GCN) to address the aforementioned problems. First, to solve the inaccuracy of infection intensity calculation caused by the sparsity of node interaction behavior data in the malware propagation network, a tensor-based mechanism for mining the infection intensity among nodes is proposed to retain the network structure information, and the influence of the relationships between nodes on the infection intensity is analyzed. Second, given the diversity and complexity of the content and structure of infected and normal nodes in the network, and considering the advantages of representation learning in data feature extraction, a corresponding representation learning method is adopted for the infection intensity among nodes. This efficiently computes the relationships between entities and relations in a low-dimensional space, achieving low-dimensional, dense, real-valued representation learning of the propagation spatial data. We also design a new method, Tensor2vec, to learn the potential structural features of malware propagation. Finally, considering the convolution ability of GCN for non-Euclidean data, we propose a dynamic prediction model of malware propagation based on representation learning and GCN to address the timeliness of the malware propagation carrier. The experimental results show that the proposed model can effectively predict the behaviors of the nodes in the network and discover the influence of different node characteristics on the malware propagation situation.
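As a final illustration, a plain two-layer GCN that outputs one infection logit per node is sketched below; the Tensor2vec feature construction and infection-intensity modeling are omitted, and the inputs are random toy tensors.

```python
import torch
import torch.nn as nn

class TwoLayerGCN(nn.Module):
    # Minimal GCN for node-level infection prediction.
    def __init__(self, in_dim, hid_dim=32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, 1)

    def forward(self, x, adj_norm):
        h = torch.relu(adj_norm @ self.w1(x))
        return self.w2(adj_norm @ h).squeeze(-1)          # one logit per node

x = torch.randn(50, 16)                                   # toy node features (e.g., Tensor2vec output)
adj_norm = torch.eye(50)                                  # toy normalized adjacency with self-loops
probs = torch.sigmoid(TwoLayerGCN(16)(x, adj_norm))       # predicted infection probability per node
```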