With the increasing demand for electrical services, wind farm layout optimization has become one of the biggest challenges in the field. Despite the promising performance of heuristic algorithms on the route network design problem, the expressive capability and search performance of such algorithms on multi-objective problems remain unexplored. In this paper, the wind farm layout optimization problem is first defined. Then, a multi-objective algorithm based on Graph Neural Network (GNN) and Variable Neighborhood Search (VNS) is proposed. The GNN provides base representations for the subsequent search so that the expressiveness and search accuracy of the algorithm can be improved. A multi-objective VNS algorithm is put forward by combining VNS with a multi-objective optimization algorithm to solve problems with multiple objectives. The proposed algorithm is applied to an 18-node simulation example to evaluate the feasibility and practicality of the developed optimization strategy. The experiment on the simulation example shows that the proposed algorithm yields a 6.1% reduction in the Point of Common Coupling (PCC) metric over the current state-of-the-art algorithm, meaning that the designed layout improves power supply quality by 6.1% at the same cost. Ablation experiments show that the proposed algorithm improves power quality by more than 8.6% and 7.8% compared with the original VNS algorithm and the multi-objective VNS algorithm, respectively.
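Since the abstract does not give the concrete shaking and local-search operators of the proposed multi-objective VNS, the basic single-objective VNS loop it builds on can be sketched as follows. The function names, the toy one-dimensional "spread turbines apart" objective, and the two neighborhood operators are illustrative assumptions, not the paper's implementation:

```python
import random

def vns_minimize(objective, initial, neighborhoods, budget=500, seed=0):
    """Basic Variable Neighborhood Search: shake in neighborhood k, restart
    at the first neighborhood on improvement, otherwise escalate to k+1."""
    rng = random.Random(seed)
    best, best_val = list(initial), objective(initial)
    k = 0
    for _ in range(budget):                      # total evaluation budget
        candidate = neighborhoods[k](best, rng)  # shake in neighborhood k
        if objective(candidate) < best_val:
            best, best_val = candidate, objective(candidate)
            k = 0                                # improvement: back to the first neighborhood
        else:
            k = (k + 1) % len(neighborhoods)     # escalate, wrapping around
    return best, best_val

# Toy layout objective: place turbines on a line segment so they spread out
# (maximize pairwise distances, expressed as a minimization).
def spread_cost(layout):
    return -sum(abs(a - b) for i, a in enumerate(layout) for b in layout[i + 1:])

def move_one(layout, rng):
    cand = list(layout)
    cand[rng.randrange(len(cand))] = rng.uniform(0.0, 10.0)
    return cand

def move_two(layout, rng):
    return move_one(move_one(layout, rng), rng)

best, cost = vns_minimize(spread_cost, [5.0, 5.0, 5.0], [move_one, move_two])
```

The neighborhood escalation is what distinguishes VNS from plain local search: a larger perturbation (`move_two`) is tried only when the smaller one stops improving.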
Traditional malware research mainly takes recognition and detection as its breakthrough point, without focusing on propagation trends or predicting subsequently infected nodes. The complexity of network structure, the diversity of network nodes, and the sparsity of data all pose difficulties for propagation prediction. This paper proposes a malware propagation prediction model based on representation learning and Graph Convolutional Networks (GCN) to address these problems. First, to solve the inaccuracy of infection-intensity calculation caused by the sparsity of node interaction behavior data in the malware propagation network, a tensor-based mechanism for mining the infection intensity among nodes is proposed to retain the network structure information, and the influence of inter-node relationships on infection intensity is analyzed. Second, given the diversity and complexity of the content and structure of infected and normal nodes in the network, and considering the advantages of representation learning in data feature extraction, a corresponding representation learning method is adopted for the infection-intensity characteristics among nodes. This efficiently computes the relationships between entities in low-dimensional space, achieving a low-dimensional, dense, real-valued representation of the propagation spatial data. We also design a new method, Tensor2vec, to learn the latent structural features of malware propagation. Finally, considering the convolution ability of GCN on non-Euclidean data, we propose a dynamic prediction model of malware propagation based on representation learning and GCN to address the time-effectiveness problem of the malware propagation carrier. The experimental results show that the proposed model can effectively predict the behaviors of the nodes in the network and discover the influence of different node characteristics on the malware propagation situation.
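The GCN propagation at the core of such a model can be illustrated with a single layer over a tiny graph. The path graph, node features, and weight matrix below are toy assumptions for illustration, not the paper's data:

```python
import math

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A+I) D^-1/2 * H * W)."""
    n = len(adj)
    # Add self-loops, then symmetrically normalise by node degree.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    h = matmul(matmul(norm, feats), weight)
    return [[max(0.0, v) for v in row] for row in h]    # ReLU

# Path graph 0-1-2: the ends are "infected" (feature 1), the middle is clean.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0], [0.0], [1.0]]
weight = [[1.0]]
out = gcn_layer(adj, feats, weight)
```

After one propagation step, the clean middle node picks up infection signal from both neighbors, which is exactly the neighbor-aggregation behavior the prediction model exploits.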
Most existing network representation learning algorithms focus on network structures for learning. However, network structure is only one view and feature of a network and cannot fully reflect all its characteristics. In fact, network vertices usually contain rich text information, which can be exploited to learn text-enhanced network representations. Meanwhile, the Matrix-Forest Index (MFI) has shown high effectiveness and stability in link prediction tasks compared with other link prediction algorithms. Neither MFI nor Inductive Matrix Completion (IMC), however, has been well integrated into the algorithmic frameworks of typical representation learning methods. Therefore, we propose a novel semi-supervised algorithm, tri-party deep network representation learning using inductive matrix completion (TDNR). Based on the inductive matrix completion algorithm, TDNR incorporates text features, the link certainty degrees of existing edges, and the future link probabilities of non-existing edges into network representations. The experimental results demonstrate that TDNR outperforms other baselines on three real-world datasets. Visualizations of TDNR show that the proposed algorithm is more discriminative than other unsupervised approaches.
A local and global context representation learning model for Chinese characters is designed and a Chinese word segmentation method based on character representations is proposed in this paper. First, the proposed Chinese character learning model uses the semantics of local context and global context to learn the representation of Chinese characters. Then, a Chinese word segmentation model is built with a neural network, and the segmentation model is trained with the character representations as its input features. Finally, experimental results show that Chinese character representations can effectively capture semantic information: characters with similar semantics cluster together in the visualized space. Moreover, the proposed Chinese word segmentation model also achieves a considerable improvement in precision, recall and F-measure.
The homogeneity analysis of a multi-airport system can provide important decision-making support for route layout and cooperative operation. Existing research seldom analyzes the homogeneity of a multi-airport system from the perspective of route network analysis, and the attribute information of airport nodes is not appropriately integrated into the airport route network. To solve this problem, a multi-airport system homogeneity analysis method based on airport attributed network representation learning is proposed. Firstly, the route network of a multi-airport system with attribute information is constructed: if there are flights between two airports, an edge is added between them, and regional attribute information is attached to each airport node. Secondly, the airport attributes and the airport network vector are each represented and then embedded into a unified airport representation vector space by the network representation learning method, yielding an airport vector that integrates both the airport attributes and the airport network characteristics. By computing the similarity of the airport vectors, the degree of homogeneity between airports, and of the multi-airport system as a whole, can be conveniently calculated. The experimental results on the Beijing-Tianjin-Hebei multi-airport system show that, compared with other existing algorithms, the homogeneity analysis method based on attributed network representation learning yields results more consistent with the current situation of the Beijing-Tianjin-Hebei multi-airport system.
There is a large amount of exploitable information in network data. Classical community detection algorithms struggle to handle network data with sparse topology, so representation learning of network data is usually paired with clustering algorithms to solve the community detection problem. Meanwhile, the distribution of class clusters output by graph representation learning is unpredictable. Therefore, we propose an improved density peak clustering algorithm (ILDPC) for the community detection problem, which improves the local density mechanism of the original algorithm and can better accommodate class clusters of different shapes, and we study community detection in network data. The algorithm is paired with the benchmark model Graph SAmple and aggreGatE (GraphSAGE) to show the adaptability of ILDPC for community detection. The plotted decision diagram shows that the ILDPC algorithm is more discriminative in selecting density peak points than the original algorithm. Finally, the performance of K-means and other clustering algorithms on this benchmark model is compared, and the proposed algorithm is shown to be more suitable for community detection in sparse networks under the F1-score evaluation criterion. The sensitivity of the ILDPC parameters to the low-dimensional vector set output by the benchmark model GraphSAGE is also analyzed.
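The density-peak statistics that ILDPC improves upon can be sketched in their original form: each point gets a local density rho (cutoff kernel) and a delta, the distance to the nearest point of higher density, and points with high rho and high delta are chosen as cluster centers. The cutoff distance `dc` and the sample points below are illustrative:

```python
def dpc_scores(points, dc):
    """Density-peak clustering statistics: local density rho (cutoff kernel)
    and delta, the distance to the nearest point of higher density."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n = len(points)
    rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < dc)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        # The highest-density point gets the maximum distance by convention.
        delta.append(min(higher) if higher else max(dist(points[i], p) for p in points))
    return rho, delta

# Two tight clusters and one outlier.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (9, 9)]
rho, delta = dpc_scores(pts, dc=0.5)
```

In the decision diagram (rho vs. delta), the outlier `(9, 9)` shows low rho, so it is never picked as a density peak; ILDPC's contribution is replacing this cutoff-kernel density with a mechanism better suited to embedding vectors of varied cluster shape.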
With the wide application of location-based social networks (LBSNs), personalized point of interest (POI) recommendation has become popular, especially in the commercial field. Unfortunately, it is challenging to accurately recommend POIs to users because the user-POI matrix is extremely sparse. In addition, a user's check-in activities are affected by many influential factors, yet most existing studies capture only a few of them and are hard to extend to incorporate other heterogeneous information in a unified way. To address these problems, we propose a meta-path-based deep representation learning (MPDRL) model for personalized POI recommendation. In this model, we design eight types of meta-paths to fully utilize the rich heterogeneous information in LBSNs for the representations of users and POIs, and to deeply mine the correlations between users and POIs. To further improve recommendation performance, we design an attention-based long short-term memory (LSTM) network to learn the importance of different influential factors on a user's specific check-in activity. To verify the effectiveness of the proposed method, we conduct extensive experiments on a real-world dataset, Foursquare. Experimental results show that the MPDRL model improves by at least 16.97% and 23.55% over all comparison methods in terms of Precision@N (Pre@N) and Recall@N (Rec@N), respectively.
Objective: To construct a symptom-formula-herb heterogeneous-graph-structured Treatise on Febrile Diseases (Shang Han Lun, 《伤寒论》) dataset and explore an optimal node-attribute representation learning method based on graph convolutional networks (GCN). Methods: Clauses containing symptoms, formulas, and herbs were abstracted from the Treatise on Febrile Diseases to construct symptom-formula-herb heterogeneous graphs, which were used to propose a GCN-based node representation learning method, the Traditional Chinese Medicine Graph Convolution Network (TCM-GCN). The symptom-formula, symptom-herb, and formula-herb heterogeneous graphs were processed with the TCM-GCN to realize high-order message passing and neighbor aggregation, obtaining new node representation attributes and thus the sum-aggregations of symptom, formula, and herb nodes as a foundation for the downstream prediction models. Results: Comparisons among node representations with multi-hot encoding, non-fusion encoding, and fusion encoding showed that the Precision@10, Recall@10, and F1-score@10 of the fusion encoding were 9.77%, 6.65%, and 8.30% higher, respectively, than those of the non-fusion encoding in the prediction studies of the model. Conclusion: Node representations by fusion encoding achieved comparatively ideal results, indicating that the TCM-GCN is effective in realizing node-level representations of the heterogeneous-graph-structured Treatise on Febrile Diseases dataset and can elevate the performance of the downstream tasks of the diagnosis model.
In the era of big data, learning a discriminant feature representation from network traffic is an essential task for improving the detection ability of an intrusion detection system (IDS). Owing to the lack of accurately labeled network traffic data, many unsupervised feature representation learning models have been proposed with state-of-the-art performance. Yet, these models fail to consider the classification error while learning the feature representation, and the learnt representation may therefore degrade the performance of the classification task. For the first time in the field of intrusion detection, this paper proposes an unsupervised IDS model that leverages a deep autoencoder (DAE) for learning a robust feature representation and a one-class support vector machine (OCSVM) for finding a more compact decision hyperplane for intrusion detection. Specifically, the proposed model defines a new unified objective function that minimizes the reconstruction error and the classification error simultaneously. This unique contribution not only enables the model to support joint learning of the feature representation and the classifier, but also guides it to learn a robust feature representation that improves the discrimination ability of the classifier for intrusion detection. Three sets of evaluation experiments demonstrate the potential of the proposed model. First, an ablation evaluation on the benchmark dataset NSL-KDD validates the design decisions of the proposed model. Next, a performance evaluation on the recent intrusion dataset UNSW-NB15 shows its stable performance. Finally, a comparative evaluation verifies its efficacy against recently published state-of-the-art methods.
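The shape of such a unified objective can be sketched as reconstruction error plus a one-class penalty on the latent code. Note this uses an SVDD-style spherical term as a simplified stand-in for the OCSVM term; the paper's actual formulation, center, radius, and weighting are not reproduced here:

```python
def joint_objective(x, x_hat, z, center, radius, lam=1.0):
    """Unified loss sketch: autoencoder reconstruction error plus a
    one-class penalty (SVDD-style stand-in for the OCSVM term) on the
    latent code z, so representation and classifier are trained jointly."""
    recon = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    sq_dist = sum((a - c) ** 2 for a, c in zip(z, center))
    one_class = max(0.0, sq_dist - radius ** 2)   # penalise codes outside the sphere
    return recon + lam * one_class

# A well-reconstructed, in-sphere sample scores lower than an anomalous one.
normal = joint_objective([1, 0], [0.9, 0.1], [0.2, 0.1], [0, 0], radius=0.5)
anomaly = joint_objective([1, 0], [0.2, 0.8], [1.5, 1.5], [0, 0], radius=0.5)
```

Minimizing both terms at once is what prevents the representation from drifting toward codes that reconstruct well but are useless to the one-class boundary.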
Artificial intelligence generated content (AIGC) has emerged as an indispensable tool for producing large-scale content in various forms, such as images, thanks to the significant role that AI plays in imitation and production. However, interpretability and controllability remain challenges: existing AI methods often struggle to produce images that are both flexible and controllable while accounting for causal relationships within the images. To address this issue, we develop a novel method for causal controllable image generation (CCIG) that combines causal representation learning with bi-directional generative adversarial networks (GANs). This approach enables humans to control image attributes while preserving the rationality and interpretability of the generated images, and also allows the generation of counterfactual images. The key of CCIG lies in a causal structure learning module that learns the causal relationships between image attributes, jointly optimized with the encoder, generator, and joint discriminator of the image generation module. By doing so, we learn causal representations in the image's latent space and use causal intervention operations to control image generation. We conduct extensive experiments on a real-world dataset, CelebA; the experimental results illustrate the effectiveness of CCIG.
Metapaths with specific complex semantics are critical for most existing representation learning models to learn the diverse semantic and structural information of heterogeneous networks (HNs). However, any metapath consisting of multiple simple metarelations must be designed by domain experts. These sensitive, expensive, and limited metapaths severely reduce the flexibility and scalability of existing models. To address this problem, a metapath-free, scalable representation learning model called Metarelation2vec is proposed for HNs, with biased joint learning of all metarelations. Specifically, a metarelation-aware biased walk strategy is first designed to obtain better training samples by auto-generating cooperation probabilities for all metarelations rather than relying on expert-given metapaths. Then, with nodes grouped by type, a common shallow skip-gram model is used to separately learn structural proximity for each node type. Next, with links grouped by type, a novel shallow model is used to separately learn semantic proximity for each link type. Finally, supervised by the cooperation probabilities of all meta-words, the biased training samples are fed into the shallow models to jointly learn the structural and semantic information in the HNs, ensuring the accuracy and scalability of the model. Extensive experimental results on three tasks and four open datasets demonstrate the advantages of the proposed model.
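A metarelation-aware biased walk can be illustrated as follows. The toy author-paper-venue graph, edge types, and fixed cooperation probabilities are hypothetical; the paper auto-generates these probabilities rather than fixing them by hand:

```python
import random

def biased_walk(graph, edge_type, type_prob, start, length, seed=0):
    """Metarelation-aware walk sketch: among a node's neighbours, prefer
    edges whose (here: hand-set) cooperation probability is higher."""
    rng = random.Random(seed)
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph[walk[-1]]
        if not nbrs:
            break
        weights = [type_prob[edge_type[(walk[-1], n)]] for n in nbrs]
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

# Tiny heterogeneous graph: author a1 writes paper p1, published in venue v1.
graph = {"a1": ["p1"], "p1": ["a1", "v1"], "v1": ["p1"]}
edge_type = {("a1", "p1"): "writes", ("p1", "a1"): "writes",
             ("p1", "v1"): "published_in", ("v1", "p1"): "published_in"}
type_prob = {"writes": 0.7, "published_in": 0.3}
walk = biased_walk(graph, edge_type, type_prob, "a1", 5)
```

The walks produced this way replace expert-designed metapaths as training samples for the downstream skip-gram-style models.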
Nowadays, personalized recommendation has become a research hotspot for addressing information overload. Despite this, generating effective recommendations from sparse data remains a challenge. Recently, auxiliary information has been widely used to address data sparsity, but most models using auxiliary information are linear and have limited expressiveness. Thanks to their advantages in feature extraction and freedom from label requirements, autoencoder-based methods have become quite popular. However, most existing autoencoder-based methods discard the reconstruction of auxiliary information, which poses huge challenges for better representation learning and model scalability. To address these problems, we propose the Serial Autoencoder for Personalized Recommendation (SAPR), which aims to reduce the loss of critical information and enhance the learning of feature representations. Specifically, we first combine the original rating matrix and item attribute features and feed them into a first autoencoder to generate a higher-level representation of the input. Second, we use a second autoencoder to enhance the reconstruction of the data representation of the predicted rating matrix, and the output rating information is used for recommendation prediction. Extensive experiments on the MovieTweetings and MovieLens datasets verify the effectiveness of SAPR compared with state-of-the-art models.
The purpose of unsupervised domain adaptation is to use knowledge from a source domain, whose data distribution differs from that of the target domain, to promote the learning task in the target domain. The key bottleneck is how to obtain higher-level, more abstract feature representations between the source and target domains that can bridge the chasm of domain discrepancy. Recently, deep learning methods based on autoencoders have achieved sound performance in representation learning, and many dual or serial autoencoder-based methods take different characteristics of the data into consideration to improve the effectiveness of unsupervised domain adaptation. However, most existing autoencoder methods simply serially connect the features generated by different autoencoders, which hinders discriminative representation learning and fails to find the real cross-domain features. To address this problem, we propose a novel representation learning method based on integrated autoencoders for unsupervised domain adaptation, called IAUDA. To capture the inter- and inner-domain features of the raw data, two different autoencoders, a marginalized autoencoder with maximum mean discrepancy (mAE) and a convolutional autoencoder (CAE), are used to learn different feature representations. After higher-level features are obtained by these two autoencoders, a sparse autoencoder is introduced to compact the inter- and inner-domain representations. In addition, a whitening layer is embedded before the mAE to reduce redundant features inside a local area. Experimental results demonstrate the effectiveness of the proposed method compared with several state-of-the-art baseline methods.
To leverage the enormous amount of unlabeled data on distributed edge devices, we formulate a new problem in federated learning called federated unsupervised representation learning (FURL): learning a common representation model without supervision while preserving data privacy. FURL poses two new challenges: (1) the data distribution shift (non-independent and identically distributed, non-IID) among clients makes local models focus on different categories, leading to inconsistent representation spaces; (2) without unified information among the clients, the representations across clients would be misaligned. To address these challenges, we propose the federated contrastive averaging with dictionary and alignment (FedCA) algorithm. FedCA is composed of two key modules: a dictionary module that aggregates the representations of samples from each client and shares them with all clients for consistency of the representation space, and an alignment module that aligns the representation of each client on a base model trained on public data. We adopt the contrastive approach for local model training. Through extensive experiments under three evaluation protocols in IID and non-IID settings, we demonstrate that FedCA outperforms all baselines by significant margins.
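The contrastive objective used for local training in such frameworks is typically an NT-Xent-style loss: pull an augmented positive pair together while pushing it away from negatives (in FedCA's case, the shared dictionary supplies negatives). A minimal single-pair sketch, with illustrative embeddings and temperature:

```python
import math

def nt_xent_pair(z_i, z_j, negatives, tau=0.5):
    """Contrastive (NT-Xent-style) loss for one positive pair (z_i, z_j)
    against a set of negative embeddings, using cosine similarity."""
    def cos(a, b):
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / (na * nb)
    pos = math.exp(cos(z_i, z_j) / tau)
    denom = pos + sum(math.exp(cos(z_i, n) / tau) for n in negatives)
    return -math.log(pos / denom)

# An aligned positive pair yields a lower loss than a misaligned one.
aligned = nt_xent_pair([1, 0], [0.9, 0.1], [[-1, 0], [0, -1]])
misaligned = nt_xent_pair([1, 0], [-1, 0], [[0.9, 0.1], [0, -1]])
```

In the non-IID setting this loss is exactly where the dictionary module helps: clients with few local samples still see a rich, consistent negative set.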
When various urban functions are integrated into one location, they form a mixture of functions. Emerging big data offer an alternative way to identify such mixed functions, but current methods are largely unable to extract deep features from these data, resulting in low accuracy. In this study, we focus on recognizing mixed urban functions from the perspective of human activities, which are essential indicators of functional areas in a city. We propose a framework that comprehensively extracts deep features of human activities in big data, including activity dynamics, mobility interactions, and activity semantics, through representation learning methods. Integrating these features, we then employ fuzzy clustering to identify the mixture of urban functions. We conduct a case study using taxi flow and social media data in Beijing, China, in which five urban functions and their correlations with land use are recognized. The mixture degree of urban functions in each location is revealed and shows a negative correlation with taxi trip distance. The results confirm the advantages of our method in understanding mixed urban functions by employing various representation learning methods to comprehensively depict human activities. This study has important implications for urban planners in understanding urban systems and developing better strategies.
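The fuzzy clustering step can be illustrated with the standard fuzzy c-means membership update, which gives each location a soft degree of belonging to every function cluster, and hence a mixture degree. The centres and points below are toy values; in the study the inputs would be the learned activity feature vectors:

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy c-means membership update:
    u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)),
    so each point gets a soft membership in every cluster."""
    def dist(a, b):
        # Floor tiny distances so a point sitting on a centre is well-defined.
        return max(sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5, 1e-12)
    u = []
    for p in points:
        d = [dist(p, c) for c in centers]
        row = [1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1)) for j in range(len(centers)))
               for k in range(len(centers))]
        u.append(row)
    return u

# A point midway between two "function" centres gets an even mixed membership.
u = fcm_memberships([(0, 0), (10, 10), (5, 5)], [(0, 0), (10, 10)])
```

A location whose membership row is concentrated on one cluster is single-function; an even row like the midpoint's indicates a high mixture degree.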
Early studies on discourse rhetorical structure parsing mainly adopt bottom-up approaches, limiting the parsing process to local information. Although current top-down parsers can better capture global information and have achieved particular success, the importance of local and global information differs at various levels of discourse parsing. This paper argues that combining local and global information is more sensible for discourse parsing. To prove this, we introduce a top-down discourse parser with bidirectional representation learning capabilities. Existing corpora on Rhetorical Structure Theory (RST) are known to be very limited in size, which makes discourse parsing challenging; to alleviate this problem, we leverage boundary features and a data augmentation strategy to tap the potential of our parser. We use two methods for evaluation, and the experiments on the RST-DT corpus show that our parser improves performance primarily through the effective combination of local and global information, with the boundary features and the data augmentation strategy also playing a role. Based on gold-standard elementary discourse units (EDUs), our parser significantly advances the baseline systems in nuclearity detection, with competitive results on the other three indicators (span, relation, and full). Based on automatically segmented EDUs, our parser still outperforms previous state-of-the-art work.
Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, these models have not been compared across different mesh representations and evaluated jointly with point-wise distance and perceptual metrics. Methods: We compare the influence of different mesh representation features on the spatial and perceptual fidelity of meshes reconstructed by various deep 3DMMs. This paper supports the hypothesis that building deep 3DMMs from meshes represented with global representations leads to lower spatial reconstruction error, measured with the L1 and L2 norm metrics, but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME scores but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from the perceptual and spatial fidelity perspectives. Results: The results provide guidance in selecting mesh representations for building deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
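The point-wise spatial fidelity metrics mentioned above reduce to mean per-vertex L1 and L2 (Euclidean) distances between corresponding vertices of the reference and reconstructed meshes; a minimal sketch with toy two-vertex meshes (perceptual metrics such as FMPD and DAME are not sketched here):

```python
def vertex_errors(mesh_ref, mesh_rec):
    """Point-wise spatial fidelity between two meshes with matched vertices:
    mean per-vertex L1 distance and mean per-vertex L2 (Euclidean) distance."""
    n = len(mesh_ref)
    l1 = sum(sum(abs(x - y) for x, y in zip(a, b))
             for a, b in zip(mesh_ref, mesh_rec)) / n
    l2 = sum(sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
             for a, b in zip(mesh_ref, mesh_rec)) / n
    return l1, l2

ref = [(0, 0, 0), (1, 0, 0)]
rec = [(0, 0, 0), (1, 1, 1)]   # second vertex displaced by (0, 1, 1)
l1, l2 = vertex_errors(ref, rec)
```

Because these metrics average per-vertex displacements, they reward globally accurate positions while saying nothing about surface smoothness, which is why the paper pairs them with perceptual metrics.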
Neural-network-based deep learning methods aim to learn representations of data and have produced state-of-the-art results in many natural language processing (NLP) tasks. Discourse parsing is an important research topic in discourse analysis, aiming to infer the discourse structure and model the coherence of a given text. This survey covers text-level discourse parsing, shallow discourse parsing, and coherence assessment. We first introduce the basic concepts and traditional approaches, and then focus on recent advances in discourse-structure-oriented representation learning. We also introduce the trend of discourse-structure-aware representation learning, which exploits discourse structures or discourse objectives to learn representations of sentences and documents, for specific applications or for general purposes. Finally, we present a brief summary of the progress and discuss several future directions.
Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. With the emergence of huge amounts of multi-modal heterogeneous spatial/temporal/spatial-temporal data in the big data era, the lack of interpretability, robustness, and out-of-distribution generalization has become a challenge for existing visual models. The majority of existing methods tend to fit the original data/variable distributions while ignoring the essential causal relations behind the multi-modal knowledge, lacking unified guidance and analysis of why modern visual representation learning methods easily collapse into data bias and exhibit limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.
Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning, which leverages the input itself as supervision, is strongly preferred for its soaring performance on visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations on synthetic data, which can be obtained easily with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given a synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps, i.e., depth maps, instance contour maps, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between the realistic and the synthetic data. Experiments demonstrate that our proposed method achieves state-of-the-art performance on several visual recognition datasets.
Funding: Supported by the Natural Science Foundation of Zhejiang Province (LY19A020001).
Abstract: With the increasing demand for electrical services, wind farm layout optimization has become one of the major challenges we face. Despite the promising performance of heuristic algorithms on the route network design problem, the expressive capability and search performance of such algorithms on multi-objective problems remain unexplored. In this paper, the wind farm layout optimization problem is defined. Then, a multi-objective algorithm based on the Graph Neural Network (GNN) and the Variable Neighborhood Search (VNS) algorithm is proposed. The GNN provides the basis representations for the subsequent search algorithm so that the expressiveness and search accuracy of the algorithm can be improved. The multi-objective VNS algorithm is put forward by combining VNS with a multi-objective optimization algorithm to solve problems with multiple objectives. The proposed algorithm is applied to an 18-node simulation example to evaluate the feasibility and practicality of the developed optimization strategy. The experiment on the simulation example shows that the proposed algorithm yields a reduction of 6.1% at the Point of Common Coupling (PCC) over the current state-of-the-art algorithm, which means that the proposed algorithm designs a layout that improves the quality of the power supply by 6.1% at the same cost. The ablation experiments show that the proposed algorithm improves the power quality by more than 8.6% and 7.8% compared to the original VNS algorithm and the multi-objective VNS algorithm, respectively.
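The Variable Neighborhood Search component this abstract relies on can be illustrated with a minimal sketch. Everything below is hypothetical: a toy sum-of-squares objective and two hand-picked neighborhood moves stand in for the paper's actual wind farm layout model and neighborhoods.

```python
import random

def vns_minimize(x0, objective, neighborhoods, max_iters=200, seed=0):
    """Minimal Variable Neighborhood Search: shake in neighborhood k; on an
    improving move restart at the first neighborhood, otherwise widen to k+1."""
    rng = random.Random(seed)
    best, best_val = x0, objective(x0)
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = neighborhoods[k](best, rng)
            cand_val = objective(candidate)
            if cand_val < best_val:        # improvement found: reset to k = 0
                best, best_val = candidate, cand_val
                k = 0
            else:                          # no improvement: try a wider neighborhood
                k += 1
    return best, best_val

def perturb_one(v, rng):
    """Neighborhood 1: move a single coordinate by +-1."""
    i = rng.randrange(len(v))
    return [x + rng.choice([-1, 1]) if j == i else x for j, x in enumerate(v)]

def perturb_all(v, rng):
    """Neighborhood 2 (wider): move every coordinate by a small random step."""
    return [x + rng.choice([-2, -1, 1, 2]) for x in v]

# Toy objective: minimize the sum of squares of an integer vector.
obj = lambda v: sum(x * x for x in v)
sol, val = vns_minimize([5, -3, 7], obj, [perturb_one, perturb_all])
print(sol, val)
```

The restart-at-`k = 0` rule is what distinguishes VNS from a plain multi-neighborhood local search: wider neighborhoods are only consulted once the narrower ones stop improving.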
Funding: This research is partially supported by the National Natural Science Foundation of China (Grant No. 61772098); the Chongqing Technology Innovation and Application Development Project (Grant No. cstc2020jscxmsxmX0150); the Chongqing Science and Technology Innovation Leading Talent Support Program (CSTCCXLJRC201908); the Basic and Advanced Research Projects of CSTC (No. cstc2019jcyj-zdxmX0008); and the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJZD-K201900605).
Abstract: Traditional malware research mainly takes recognition and detection as its breakthrough point, without focusing on propagation trends or predicting subsequently infected nodes. The complexity of network structure, the diversity of network nodes, and the sparsity of data all pose difficulties in predicting propagation. This paper proposes a malware propagation prediction model based on representation learning and Graph Convolutional Networks (GCN) to address these problems. First, to solve the inaccuracy of infection intensity calculation caused by the sparsity of node interaction behavior data in the malware propagation network, a tensor-based mechanism for mining the infection intensity among nodes is proposed to retain the network structure information. The influence of the relationships between nodes on the infection intensity is also analyzed. Second, given the diversity and complexity of the content and structure of infected and normal nodes in the network, and considering the advantages of representation learning in data feature extraction, a representation learning method suited to the characteristics of infection intensity among nodes is adopted. This efficiently computes the relations between entities in a low-dimensional space, achieving a low-dimensional, dense, real-valued representation of the propagation spatial data. We also design a new method, Tensor2vec, to learn the potential structural features of malware propagation. Finally, considering the convolution ability of GCN on non-Euclidean data, we propose a dynamic prediction model of malware propagation based on representation learning and GCN to solve the time-effectiveness problem of the malware propagation carrier. The experimental results show that the proposed model can effectively predict the behaviors of the nodes in the network and discover the influence of different node characteristics on the malware propagation situation.
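The GCN propagation at the heart of models like the one above can be sketched in a few lines. This is a generic Kipf-Welling-style layer in NumPy, not the paper's full dynamic model; the toy adjacency matrix, features, and weights are made up for illustration.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Tiny 3-node propagation graph (path 0-1-2) with 2-d node features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.array([[1., 0.], [0., 1.], [1., 1.]])
W = np.ones((2, 2))
print(gcn_layer(A, H, W))
```

Stacking such layers lets each node aggregate information from neighbors several hops away, which is what makes GCNs suitable for the non-Euclidean propagation graphs the abstract mentions.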
Funding: Projects (11661069, 61763041) supported by the National Natural Science Foundation of China; Project (IRT_15R40) supported by the Changjiang Scholars and Innovative Research Team in University, China; Project (2017TS045) supported by the Fundamental Research Funds for the Central Universities, China.
Abstract: Most existing network representation learning algorithms focus on network structures for learning. However, network structure is only one view and feature of a network, and it cannot fully reflect all characteristics of networks. In fact, network vertices usually contain rich text information, which can be well utilized to learn text-enhanced network representations. Meanwhile, the Matrix-Forest Index (MFI) has shown high effectiveness and stability in link prediction tasks compared with other link prediction algorithms. Neither MFI nor Inductive Matrix Completion (IMC) has been well integrated into the algorithmic frameworks of typical representation learning methods. Therefore, we propose a novel semi-supervised algorithm, tri-party deep network representation learning using inductive matrix completion (TDNR). Based on the inductive matrix completion algorithm, TDNR incorporates text features, the link certainty degrees of existing edges, and the future link probabilities of non-existing edges into network representations. The experimental results demonstrate that TDNR outperforms other baselines on three real-world datasets. The visualizations of TDNR show that the proposed algorithm is more discriminative than other unsupervised approaches.
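The Matrix-Forest Index cited in this abstract has a compact closed form, S = (I + L)^-1 with L the graph Laplacian, which makes a short sketch possible. The path graph below is a made-up toy example, not data from the paper.

```python
import numpy as np

def matrix_forest_index(A):
    """Matrix-Forest Index: S = (I + L)^{-1}, where L = D - A is the graph
    Laplacian. S[i, j] scores the structural proximity of nodes i and j."""
    L = np.diag(A.sum(axis=1)) - A
    return np.linalg.inv(np.eye(len(A)) + L)

# Path graph 0-1-2: the existing edge (0,1) should score higher than the
# non-adjacent pair (0,2).
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
S = matrix_forest_index(A)
print(S[0, 1] > S[0, 2])  # True
```

For link prediction, node pairs are simply ranked by S[i, j]; the matrix is symmetric and well defined even for disconnected graphs, which is one source of the stability the abstract mentions.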
Funding: Supported by the National Natural Science Foundation of China (Nos. 61303179, U1135005, 61175020).
Abstract: A local and global context representation learning model for Chinese characters is designed and a Chinese word segmentation method based on character representations is proposed in this paper. First, the proposed Chinese character learning model uses the semantics of local context and global context to learn the representation of Chinese characters. Then, a Chinese word segmentation model is built with a neural network, and the segmentation model is trained with the character representations as its input features. Finally, experimental results show that the Chinese character representations can effectively capture semantic information: characters with similar semantics cluster together in the visualization space. Moreover, the proposed Chinese word segmentation model also achieves a considerable improvement in precision, recall and F-measure.
Funding: Supported by the Natural Science Foundation of Tianjin (No. 20JCQNJC00720) and the Fundamental Research Fund for the Central Universities (No. 3122021052).
Abstract: The homogeneity analysis of a multi-airport system can provide important decision-making support for route layout and cooperative operation. Existing research seldom analyzes the homogeneity of a multi-airport system from the perspective of route network analysis, and the attribute information of airport nodes in the airport route network is not appropriately integrated into the airport network. To solve this problem, a multi-airport system homogeneity analysis method based on airport attributed network representation learning is proposed. Firstly, the route network of a multi-airport system with attribute information is constructed: if there are flights between airports, an edge is added between them, and regional attribute information is added for each airport node. Secondly, the airport attributes and the airport network vector are represented separately, then embedded into a unified airport representation vector space by the network representation learning method, yielding an airport vector that integrates the airport attributes and the airport network characteristics. By calculating the similarity between airport vectors, it is convenient to compute the degree of homogeneity between airports and the homogeneity of the multi-airport system. The experimental results on the Beijing-Tianjin-Hebei multi-airport system show that, compared with other existing algorithms, the homogeneity analysis method based on attributed network representation learning obtains results more consistent with the current situation of the Beijing-Tianjin-Hebei multi-airport system.
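The final similarity step this abstract describes is typically a cosine similarity between embedding vectors; a minimal sketch follows. The 4-d airport embeddings are invented placeholders, and the abstract does not specify which similarity measure the authors use, so cosine here is an assumption.

```python
import numpy as np

def homogeneity(u, v):
    """Cosine similarity between two airport embedding vectors; values near 1
    indicate highly homogeneous (substitutable) airports."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings for two airports in the same multi-airport system.
a = np.array([0.9, 0.1, 0.4, 0.2])
b = np.array([0.8, 0.2, 0.5, 0.1])
print(round(homogeneity(a, b), 3))
```

Averaging the pairwise scores over all airport pairs would then give a single homogeneity figure for the whole multi-airport system.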
Funding: The National Natural Science Foundation of China (No. 61762031); the Science and Technology Major Project of Guangxi Province (No. AA19046004); the Natural Science Foundation of Guangxi (No. 2021JJA170130).
Abstract: There is a large amount of information in network data that we can exploit. It is difficult for classical community detection algorithms to handle network data with sparse topology. Representation learning of network data is usually paired with clustering algorithms to solve the community detection problem. Meanwhile, there is always an unpredictable distribution of the class clusters output by graph representation learning. Therefore, we propose an improved density peak clustering algorithm (ILDPC) for the community detection problem, which improves the local density mechanism of the original algorithm and can better accommodate class clusters of different shapes, and we study community detection in network data. The algorithm is paired with the benchmark model Graph sample and aggregate (GraphSAGE) to show the adaptability of ILDPC for community detection. The plotted decision diagram shows that the ILDPC algorithm is more discriminative in selecting density peak points than the original algorithm. Finally, the performance of K-means and other clustering algorithms on this benchmark model is compared, and the algorithm is shown to be more suitable for community detection in sparse networks with the benchmark model on the evaluation criterion F1-score. The sensitivity of the parameters of the ILDPC algorithm to the low-dimensional vector set output by the benchmark model GraphSAGE is also analyzed.
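The decision diagram mentioned above plots two quantities per point from the original density peak clustering method: local density rho and the distance delta to the nearest higher-density point. The sketch below computes both with the basic cutoff-density rule on a made-up 2-d toy set; it illustrates the baseline mechanism, not the paper's improved ILDPC variant.

```python
import numpy as np

def density_peaks(X, d_c):
    """Quantities for a density-peak decision diagram: rho[i] counts points
    within cutoff d_c of point i, and delta[i] is the distance from i to its
    nearest point of strictly higher density (or the max distance if none)."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # pairwise distances
    rho = (D < d_c).sum(axis=1) - 1                       # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if len(higher) else D[i].max()
    return rho, delta

# A dense 4-point cluster plus a sparse 2-point pair.
X = np.array([[0, 0], [0.1, 0], [0, 0.1], [0.1, 0.1], [5, 5], [5.1, 5]])
rho, delta = density_peaks(X, d_c=0.5)
print(rho, delta)
```

Cluster centers stand out on the diagram as points with both high rho and high delta, which is exactly where the paper's improved local density mechanism claims sharper discrimination.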
Funding: National Natural Science Foundation of China (No. 61972080); Shanghai Rising-Star Program, China (No. 19QA1400300).
Abstract: With the wide application of location-based social networks (LBSNs), personalized point of interest (POI) recommendation has become popular, especially in the commercial field. Unfortunately, it is challenging to accurately recommend POIs to users because the user-POI matrix is extremely sparse. In addition, a user's check-in activities are affected by many influential factors. However, most existing studies capture only a few influential factors, and it is hard for them to be extended to incorporate other heterogeneous information in a unified way. To address these problems, we propose a meta-path-based deep representation learning (MPDRL) model for personalized POI recommendation. In this model, we design eight types of meta-paths to fully utilize the rich heterogeneous information in LBSNs for the representations of users and POIs, and deeply mine the correlations between users and POIs. To further improve the recommendation performance, we design an attention-based long short-term memory (LSTM) network to learn the importance of different influential factors on a user's specific check-in activity. To verify the effectiveness of our proposed method, we conduct extensive experiments on a real-world dataset, Foursquare. Experimental results show that the MPDRL model improves by at least 16.97% and 23.55% over all comparison methods in terms of the metrics Precision@N (Pre@N) and Recall@N (Rec@N), respectively.
Funding: New-Generation Artificial Intelligence Major Program in the Sci-Tech Innovation 2030 Agenda from the Ministry of Science and Technology of China (2018AAA0102100); Hunan Provincial Department of Education Key Project (21A0250); the First Class Discipline Open Fund of Hunan University of Traditional Chinese Medicine (2022ZYX08).
Abstract: Objective To construct a symptom-formula-herb heterogeneous-graph-structured Treatise on Febrile Diseases (Shang Han Lun, 《伤寒论》) dataset and explore an optimal node-attribute representation learning method based on the graph convolutional network (GCN). Methods Clauses that contain symptoms, formulas, and herbs were abstracted from Treatise on Febrile Diseases to construct symptom-formula-herb heterogeneous graphs, which were used to propose a node representation learning method based on GCN, the Traditional Chinese Medicine Graph Convolution Network (TCM-GCN). The symptom-formula, symptom-herb, and formula-herb heterogeneous graphs were processed with the TCM-GCN to realize high-order message passing and neighbor aggregation and obtain new node representation attributes, thus acquiring the nodes' sum-aggregations of symptoms, formulas, and herbs to lay a foundation for the downstream tasks of the prediction models. Results Comparisons among node representations with multi-hot encoding, non-fusion encoding, and fusion encoding showed that the Precision@10, Recall@10, and F1-score@10 of the fusion encoding were 9.77%, 6.65%, and 8.30% higher, respectively, than those of the non-fusion encoding in the prediction studies of the model. Conclusion Node representations by fusion encoding achieved comparatively ideal results, indicating that the TCM-GCN is effective in realizing node-level representations of the heterogeneous-graph-structured Treatise on Febrile Diseases dataset and is able to elevate the performance of the downstream tasks of the diagnosis model.
Funding: This work was supported by the Research Deanship of Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia (Grant No. 2020/01/17215). The author also thanks the Deanship of the College of Computer Engineering and Sciences for the technical support provided to complete the project successfully.
Abstract: In the era of big data, learning discriminative feature representations from network traffic has been identified as an essential task for improving the detection ability of an intrusion detection system (IDS). Owing to the lack of accurately labeled network traffic data, many unsupervised feature representation learning models have been proposed with state-of-the-art performance. Yet, these models fail to consider the classification error while learning the feature representation; intuitively, the learnt feature representation may degrade the performance of the classification task. For the first time in the field of intrusion detection, this paper proposes an unsupervised IDS model leveraging the benefits of a deep autoencoder (DAE) for learning a robust feature representation and a one-class support vector machine (OCSVM) for finding a more compact decision hyperplane for intrusion detection. Specifically, the proposed model defines a new unified objective function to minimize the reconstruction and classification errors simultaneously. This unique contribution not only enables the model to support joint learning of the feature representation and the classifier but also guides it to learn a robust feature representation that can improve the discrimination ability of the classifier for intrusion detection. Three sets of evaluation experiments are conducted to demonstrate the potential of the proposed model. First, an ablation evaluation on the benchmark dataset NSL-KDD validates the design decisions of the proposed model. Next, a performance evaluation on a recent intrusion dataset, UNSW-NB15, signifies the stable performance of the proposed model. Finally, a comparative evaluation verifies the efficacy of the proposed model against recently published state-of-the-art methods.
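The unified objective described above can be sketched as a single scalar combining a DAE reconstruction term with an OCSVM-style hinge term on the encoded features. The formulation below is illustrative only: the encoder/decoder shapes, the tanh/linear choices, and the weighting are assumptions, not the paper's exact objective, and no training loop is shown.

```python
import numpy as np

def unified_objective(X, W_enc, W_dec, w, rho, lam=1.0, nu=0.1):
    """Illustrative joint loss: DAE reconstruction error plus a one-class SVM
    objective on the latent codes (hyperplane w, offset rho), weighted by lam."""
    Z = np.tanh(X @ W_enc)                      # encoder (one tanh layer)
    X_rec = Z @ W_dec                           # linear decoder
    recon = np.mean((X - X_rec) ** 2)           # reconstruction error
    margins = Z @ w - rho                       # signed distance to hyperplane
    hinge = np.mean(np.maximum(0.0, -margins))  # penalize codes below the offset
    ocsvm = 0.5 * w @ w - rho + hinge / nu      # one-class SVM objective
    return recon + lam * ocsvm

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))                    # 16 fake traffic feature vectors
loss = unified_objective(X, rng.normal(size=(8, 4)), rng.normal(size=(4, 8)),
                         rng.normal(size=4), rho=0.5)
print(float(loss))
```

Minimizing this one scalar with respect to all parameters at once is what lets the reconstruction and the one-class boundary shape each other, which is the joint-learning idea the abstract emphasizes.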
Funding: Project supported by the National Major Science and Technology Projects of China (No. 2022YFB3303302); the National Natural Science Foundation of China (Nos. 61977012 and 62207007); the Central Universities Project in China at Chongqing University (Nos. 2021CDJYGRH011 and 2020CDJSK06PT14).
Abstract: Artificial intelligence generated content (AIGC) has emerged as an indispensable tool for producing large-scale content in various forms, such as images, thanks to the significant role that AI plays in imitation and production. However, interpretability and controllability remain challenges. Existing AI methods often struggle to produce images that are both flexible and controllable while considering causal relationships within the images. To address this issue, we have developed a novel method for causal controllable image generation (CCIG) that combines causal representation learning with bi-directional generative adversarial networks (GANs). This approach enables humans to control image attributes while considering the rationality and interpretability of the generated images, and also allows for the generation of counterfactual images. The key of our approach, CCIG, lies in the use of a causal structure learning module to learn the causal relationships between image attributes, jointly optimized with the encoder, generator, and joint discriminator in the image generation module. By doing so, we can learn causal representations in the image's latent space and use causal intervention operations to control image generation. We conduct extensive experiments on a real-world dataset, CelebA. The experimental results illustrate the effectiveness of CCIG.
Funding: Supported by the National Key Research and Development Program (No. 2019YFE0105300); the National Natural Science Foundation of China (No. 62103143); the Hunan Province Key Research and Development Program (No. 2022WK2006); the Special Project for the Construction of Innovative Provinces in Hunan (Nos. 2020TP2018 and 2019GK4030); the Scientific Research Fund of Hunan Provincial Education Department (No. 22B0471).
Abstract: Metapaths with specific complex semantics are critical for learning the diverse semantic and structural information of heterogeneous networks (HNs) in most existing representation learning models. However, any metapath consisting of multiple, simple metarelations must be provided by domain experts. These sensitive, expensive, and limited metapaths severely reduce the flexibility and scalability of the existing models. A metapath-free, scalable representation learning model, called Metarelation2vec, is proposed for HNs with biased joint learning of all metarelations to address this problem. Specifically, a metarelation-aware, biased walk strategy is first designed to obtain better training samples by using auto-generated cooperation probabilities for all metarelations rather than expert-given metapaths. Thereafter, with nodes grouped by type, a common shallow skip-gram model is used to separately learn the structural proximity for each node type. Next, with links grouped by type, a novel shallow model is used to separately learn the semantic proximity for each link type. Finally, supervised by the cooperation probabilities of all meta-words, the biased training samples are fed into the shallow models to jointly learn the structural and semantic information in the HNs, ensuring the accuracy and scalability of the models. Extensive experimental results on three tasks and four open datasets demonstrate the advantages of our proposed model.
Funding: National Natural Science Foundation of China (Grant Nos. 61906060, 62076217, and 62120106008); National Key R&D Program of China (No. 2016YFC0801406); Natural Science Foundation of the Jiangsu Higher Education Institutions (No. 20KJB520007).
Abstract: Nowadays, personalized recommendation has become a research hotspot for addressing information overload. Despite this, generating effective recommendations from sparse data remains a challenge. Recently, auxiliary information has been widely used to address data sparsity, but most models using auxiliary information are linear and have limited expressiveness. Due to their advantages in feature extraction and freedom from label requirements, autoencoder-based methods have become quite popular. However, most existing autoencoder-based methods discard the reconstruction of auxiliary information, which poses huge challenges for better representation learning and model scalability. To address these problems, we propose the Serial Autoencoder for Personalized Recommendation (SAPR), which aims to reduce the loss of critical information and enhance the learning of feature representations. Specifically, we first combine the original rating matrix and item attribute features and feed them into the first autoencoder to generate a higher-level representation of the input. Second, we use a second autoencoder to enhance the reconstruction of the data representation of the prediction rating matrix. The output rating information is used for recommendation prediction. Extensive experiments on the MovieTweetings and MovieLens datasets have verified the effectiveness of SAPR compared to state-of-the-art models.
基金supported by the National Natural Science Foundation of China(Grant Nos.61906060,62076217,62120106008)the Yangzhou University Interdisciplinary Research Foundation for Animal Husbandry Discipline of Targeted Support(yzuxk202015)+1 种基金the Opening Foundation of Key Laboratory of Huizhou Architecture in Anhui Province(HPJZ-2020-02)the Open Project Program of Joint International Research Laboratory of Agriculture and AgriProduct Safety(JILAR-KF202104).
Abstract: The purpose of unsupervised domain adaptation is to use the knowledge of a source domain, whose data distribution differs from that of the target domain, to promote the learning task in the target domain. The key bottleneck in unsupervised domain adaptation is how to obtain higher-level, more abstract feature representations between the source and target domains which can bridge the chasm of domain discrepancy. Recently, deep learning methods based on autoencoders have achieved sound performance in representation learning, and many dual or serial autoencoder-based methods take different characteristics of the data into consideration to improve the effectiveness of unsupervised domain adaptation. However, most existing autoencoder methods just serially connect the features generated by different autoencoders, which poses challenges for discriminative representation learning and fails to find the real cross-domain features. To address this problem, we propose a novel representation learning method based on integrated autoencoders for unsupervised domain adaptation, called IAUDA. To capture the inter- and inner-domain features of the raw data, two different autoencoders, a marginalized autoencoder with maximum mean discrepancy (mAE) and a convolutional autoencoder (CAE), are proposed to learn different feature representations. After higher-level features are obtained by these two different autoencoders, a sparse autoencoder is introduced to compact these inter- and inner-domain representations. In addition, a whitening layer is embedded to process features before the mAE to reduce redundant features inside a local area. Experimental results demonstrate the effectiveness of our proposed method compared with several state-of-the-art baseline methods.
Funding: Supported by the National Key Research & Development Project of China (Nos. 2021ZD0110700 and 2021ZD0110400); the National Natural Science Foundation of China (Nos. U20A20387, U19B2043, 61976185, and U19B2042); the Zhejiang Natural Science Foundation, China (No. LR19F020002); the Zhejiang Innovation Foundation, China (No. 2019R52002); the Fundamental Research Funds for the Central Universities, China.
Abstract: To leverage the enormous amount of unlabeled data on distributed edge devices, we formulate a new problem in federated learning called federated unsupervised representation learning (FURL): learning a common representation model without supervision while preserving data privacy. FURL poses two new challenges: (1) data distribution shift (non-independent and identically distributed, non-IID data) among clients makes local models focus on different categories, leading to inconsistency of representation spaces; (2) without unified information among the clients in FURL, the representations across clients would be misaligned. To address these challenges, we propose the federated contrastive averaging with dictionary and alignment (FedCA) algorithm. FedCA is composed of two key modules: a dictionary module to aggregate the representations of samples from each client, which can be shared with all clients for consistency of the representation space, and an alignment module to align the representation of each client on a base model trained on public data. We adopt the contrastive approach for local model training. Through extensive experiments with three evaluation protocols in IID and non-IID settings, we demonstrate that FedCA outperforms all baselines with significant margins.
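The local contrastive training this abstract refers to typically uses an NT-Xent (normalized temperature-scaled cross-entropy) loss between two views of each sample. The sketch below is a generic NumPy version of that loss, not FedCA's federated machinery; the batch of random embeddings is invented for illustration.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two batches of embeddings: z1[i] should
    match z2[i] (the positive pair) against every other row as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # L2-normalize views
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                              # cosine sims / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))            # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = nt_xent(z, z)                         # identical views: low loss
misaligned = nt_xent(z, rng.normal(size=(8, 16)))  # unrelated views: high loss
print(aligned, misaligned)
```

In a FedCA-style setting, the shared dictionary would extend the negative set beyond the local batch, which counteracts the non-IID inconsistency the abstract describes.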
Funding: Supported by the National Natural Science Foundation of China (Grant No. 41971331).
Abstract: When various urban functions are integrated into one location, they form a mixture of functions. Emerging big data promote an alternative way to identify mixed functions. However, current methods are largely unable to extract deep features from these data, resulting in low accuracy. In this study, we focused on recognizing mixed urban functions from the perspective of human activities, which are essential indicators of functional areas in a city. We proposed a framework to comprehensively extract deep features of human activities in big data, including activity dynamics, mobility interactions, and activity semantics, through representation learning methods. Then, integrating these features, we employed fuzzy clustering to identify the mixture of urban functions. We conducted a case study using taxi flow and social media data in Beijing, China, in which five urban functions and their correlations with land use were recognized. The mixture degree of urban functions in each location was revealed, which had a negative correlation with taxi trip distance. The results confirmed the advantages of our method in understanding mixed urban functions by employing various representation learning methods to comprehensively depict human activities. This study has important implications for urban planners in understanding urban systems and developing better strategies.
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62276178.
Abstract: Early studies on discourse rhetorical structure parsing mainly adopt bottom-up approaches, limiting the parsing process to local information. Although current top-down parsers can better capture global information and have achieved notable success, the importance of local and global information differs at various levels of discourse parsing. This paper argues that combining local and global information for discourse parsing is more sensible. To prove this, we introduce a top-down discourse parser with bidirectional representation learning capabilities. Existing corpora on Rhetorical Structure Theory (RST) are known to be much limited in size, which makes discourse parsing very challenging. To alleviate this problem, we leverage some boundary features and a data augmentation strategy to tap the potential of our parser. We use two methods for evaluation, and the experiments on the RST-DT corpus show that our parser primarily improves performance through the effective combination of local and global information; the boundary features and the data augmentation strategy also play a role. Based on gold-standard elementary discourse units (EDUs), our parser significantly advances the baseline systems in nuclearity detection, with the results on the other three indicators (span, relation, and full) being competitive. Based on automatically segmented EDUs, our parser still outperforms previous state-of-the-art work.
Funding: Supported by the Centre for Digital Entertainment at Bournemouth University through the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016540/1, and by Humain Ltd.
Abstract: Background Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems, and medical imaging. These applications require high spatial and perceptual quality of synthesised meshes. Despite their significance, these models have not been compared across different mesh representations or evaluated jointly with point-wise distance and perceptual metrics. Methods We compare the influence of different mesh representation features on various deep 3DMMs in terms of the spatial and perceptual fidelity of the reconstructed meshes. This paper proves the hypothesis that building deep 3DMMs from meshes represented with global representations leads to lower spatial reconstruction error measured with L1 and L2 norm metrics but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives. Results The results presented in this paper provide guidance in selecting mesh representations to build deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs which improve either the perceptual or the spatial fidelity of existing methods.
Funding: The National Natural Science Foundation of China (Grant Nos. 61876113 and 61876112); the Beijing Natural Science Foundation (Grant No. 4192017); the Support Project of High-level Teachers in Beijing Municipal Universities in the Period of the 13th Five-year Plan (Grant No. CIT&TCD20170322); the Capacity Building for Sci-Tech Innovation, Fundamental Scientific Research Funds.
Abstract: Neural-network-based deep learning methods aim to learn representations of data and have produced state-of-the-art results in many natural language processing (NLP) tasks. Discourse parsing is an important research topic in discourse analysis, aiming to infer the discourse structure and model the coherence of a given text. This survey covers text-level discourse parsing, shallow discourse parsing, and coherence assessment. We first introduce the basic concepts and traditional approaches, and then focus on recent advances in discourse-structure-oriented representation learning. We also introduce the trend of discourse-structure-aware representation learning, which exploits discourse structures or discourse objectives to learn representations of sentences and documents, either for specific applications or for general purposes. Finally, we present a brief summary of the progress and discuss several future directions.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62002395, 61976250 and U1811463); the National Key R&D Program of China (No. 2021ZD0111601); the Guangdong Basic and Applied Basic Research Foundation, China (Nos. 2021A15150123 and 2020B1515020048).
Abstract: Visual representation learning is ubiquitous in various real-world applications, including visual comprehension, video understanding, multi-modal analysis, human-computer interaction, and urban computing. Due to the emergence of huge amounts of multi-modal heterogeneous spatial/temporal/spatial-temporal data in the big data era, the lack of interpretability, robustness, and out-of-distribution generalization has become a challenge for existing visual models. The majority of existing methods tend to fit the original data/variable distributions and ignore the essential causal relations behind the multi-modal knowledge, and there is a lack of unified guidance and analysis about why modern visual representation learning methods easily collapse into data bias and have limited generalization and cognitive abilities. Inspired by the strong inference ability of human-level agents, recent years have therefore witnessed great effort in developing causal reasoning paradigms to realize robust representation and model learning with good cognitive ability. In this paper, we conduct a comprehensive review of existing causal reasoning methods for visual representation learning, covering fundamental theories, models, and datasets. The limitations of current methods and datasets are also discussed. Moreover, we propose some prospective challenges, opportunities, and future research directions for benchmarking causal reasoning algorithms in visual representation learning. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods, publicly available benchmarks, and consensus-building standards for reliable visual representation learning and related real-world applications.
Funding: National Natural Science Foundation of China (Nos. 61822204 and 61521002).
Abstract: Learning discriminative representations with deep neural networks often relies on massive labeled data, which is expensive and difficult to obtain in many real scenarios. As an alternative, self-supervised learning that leverages the input itself as supervision is strongly preferred for its soaring performance on visual representation learning. This paper introduces a contrastive self-supervised framework for learning generalizable representations on synthetic data, which can be obtained easily with complete controllability. Specifically, we propose to optimize a contrastive learning task and a physical property prediction task simultaneously. Given the synthetic scene, the first task aims to maximize agreement between a pair of synthetic images generated by our proposed view sampling module, while the second task aims to predict three physical property maps, i.e., depth maps, instance contour maps, and surface normal maps. In addition, a feature-level domain adaptation technique with adversarial training is applied to reduce the domain difference between the realistic and the synthetic data. Experiments demonstrate that our proposed method achieves state-of-the-art performance on several visual recognition datasets.