In the era of Big Data, learning discriminative feature representations from network traffic is an essential task for improving the detection ability of an intrusion detection system (IDS). Owing to the lack of accurately labeled network traffic data, many unsupervised feature representation learning models have been proposed with state-of-the-art performance. Yet, these models fail to consider the classification error while learning the feature representation; intuitively, the learned feature representation may degrade the performance of the classification task. For the first time in the field of intrusion detection, this paper proposes an unsupervised IDS model that leverages the benefits of a deep autoencoder (DAE) for learning a robust feature representation and a one-class support vector machine (OCSVM) for finding a more compact decision hyperplane for intrusion detection. Specifically, the proposed model defines a new unified objective function that minimizes the reconstruction and classification errors simultaneously. This contribution not only enables the model to jointly learn the feature representation and train the classifier, but also guides it toward a robust feature representation that improves the discrimination ability of the classifier. Three sets of evaluation experiments demonstrate the potential of the proposed model. First, an ablation evaluation on the benchmark dataset NSL-KDD validates the design decisions of the proposed model. Next, a performance evaluation on the recent intrusion dataset UNSW-NB15 shows the stable performance of the proposed model. Finally, a comparative evaluation verifies the efficacy of the proposed model against recently published state-of-the-art methods.
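The unified objective described in this abstract (reconstruction error plus a one-class classification term on the latent codes) can be sketched numerically. The following is a minimal numpy illustration, not the paper's model: the linear encoder/decoder, the weight `lam`, and all dimensions are hypothetical, and the classification term is the standard soft-margin one-class SVM objective evaluated on the codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for network-traffic records: 100 samples, 8 features.
X = rng.normal(size=(100, 8))

# Tiny linear autoencoder (hypothetical sizes): 8 -> 3 -> 8.
W_enc = rng.normal(scale=0.1, size=(8, 3))
W_dec = rng.normal(scale=0.1, size=(3, 8))

# OCSVM hyperplane (w, rho) acting on the 3-dim latent codes.
w = rng.normal(scale=0.1, size=3)
rho = 0.0
nu = 0.1    # one-class margin trade-off
lam = 0.5   # hypothetical weight between the two error terms

def unified_objective(X):
    Z = X @ W_enc                       # latent representation
    X_hat = Z @ W_dec                   # reconstruction
    recon = np.mean((X - X_hat) ** 2)   # reconstruction error
    # Soft-margin one-class term: penalize codes below the hyperplane.
    slack = np.maximum(0.0, rho - Z @ w)
    ocsvm = 0.5 * (w @ w) - rho + np.mean(slack) / nu
    return recon + lam * ocsvm

loss = unified_objective(X)
print(float(loss))
```

Joint training would minimize this single scalar with respect to W_enc, W_dec, w, and rho together, which is what couples the learned representation to the classifier's decision hyperplane.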
Joint extraction of elements in text, such as entities and relations, or events and their arguments, together with the specific relationships among them, is a key task in natural language processing. Most existing studies handle inter-task interaction implicitly through unified encoding or parameter sharing, and lack explicit modeling of the specific relationships between tasks, which limits the model's ability to exploit cross-task correlation information and hampers effective collaboration between tasks. To address this, we propose a Task-Collaboration Representation Enhanced model for joint extraction of elements and relationships (TCRE). The model handles the specific relationships between tasks at multiple stages, helping the sub-tasks perform finer-grained adjustment and optimization and improving overall performance. Experiments on three relation extraction datasets and one event extraction dataset show that TCRE improves average performance on entity recognition and relation extraction by 0.57% and 0.77%, respectively, and on trigger identification and argument role classification by 0.7% and 1.4%. TCRE also shows promise in alleviating the "seesaw phenomenon".
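One way to read "explicit modeling of the specific relationships between tasks" is that a downstream task head consumes the upstream task's output rather than only the shared encoding. The sketch below is a hypothetical numpy illustration of that general idea, not the TCRE architecture: the dimensions, the two heads, and the concatenation scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared encodings for a 6-token sentence (hypothetical: 16-dim).
H = rng.normal(size=(6, 16))

# Task heads: entity recognition (4 tags) and relation typing (3 types).
W_ent = rng.normal(scale=0.1, size=(16, 4))
W_rel = rng.normal(scale=0.1, size=(16 + 4, 3))

# Implicit sharing alone would feed H to both heads. Explicit task
# collaboration: the relation head also sees the entity head's output.
ent_logits = H @ W_ent
H_collab = np.concatenate([H, ent_logits], axis=1)
rel_logits = H_collab @ W_rel

print(ent_logits.shape, rel_logits.shape)
```

The collaboration signal here is the concatenated entity logits; a trained model could route richer task-specific representations between stages in the same way.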
To solve the problem that existing cross-modal entity resolution methods easily ignore the high-level semantic correlations between cross-modal data, we propose a novel cross-modal entity resolution method for image and text that integrates global and fine-grained joint attention mechanisms. First, we map the cross-modal data to a common embedding space using a feature extraction network. Then, we integrate a global joint attention mechanism and a fine-grained joint attention mechanism, enabling the model to learn both the global semantic characteristics and the local fine-grained semantic characteristics of the cross-modal data; this fully exploits the cross-modal semantic correlation and boosts the performance of cross-modal entity resolution. Experiments on the Flickr-30K and MS-COCO datasets show that the overall R@sum performance exceeds five state-of-the-art methods by 4.30% and 4.54%, respectively, demonstrating the superiority of the proposed method.
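A matching score that fuses a global similarity with a fine-grained joint attention term can be sketched as follows. This is a generic numpy illustration under assumed shapes (36 image regions, 12 words, a shared 64-dim embedding space), not the proposed network.

```python
import numpy as np

rng = np.random.default_rng(2)

regions = rng.normal(size=(36, 64))  # image region features (assumed shape)
words = rng.normal(size=(12, 64))    # word features in the same space

def l2norm(M):
    return M / np.linalg.norm(M, axis=-1, keepdims=True)

R, W = l2norm(regions), l2norm(words)

# Global term: cosine similarity of mean-pooled image and text vectors.
g_img = l2norm(R.mean(axis=0, keepdims=True))
g_txt = l2norm(W.mean(axis=0, keepdims=True))
g_sim = (g_img @ g_txt.T)[0, 0]

# Fine-grained joint attention: each word attends over all regions,
# then is compared with its attended region context.
scores = W @ R.T                                  # (12, 36) affinities
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
context = l2norm(attn @ R)                        # attended context per word
f_sim = np.mean(np.sum(context * W, axis=1))

sim = 0.5 * g_sim + 0.5 * f_sim  # fused image-text matching score
print(round(float(sim), 4))
```

The 0.5/0.5 fusion weight is an assumption; retrieval metrics such as R@sum would rank candidate texts per image (and vice versa) by this score.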
Network representation learning (NRL) aims at embedding various networks into low-dimensional continuous distributed vector spaces. Most existing representation learning methods focus on learning representations purely from the network topology, i.e., the linkage relationships between network nodes; however, the nodes of many networks carry rich text features, which are beneficial to network analysis tasks such as node classification and link prediction. In this paper, we propose a novel network representation learning model named Text-Enhanced Network Representation Learning (TENR), which introduces the text features of the nodes to learn more discriminative network representations. These representations come from joint learning of both the network topology and the text features, and capture the common influencing factors of both. In the experiments, we evaluate our proposed method against baseline methods on the task of node classification. The experimental results demonstrate that our method outperforms the baselines on three real-world datasets.
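Joint learning over topology and text can be illustrated with a toy objective that fits the adjacency structure while tying each node's representation to a projection of its text features. This is a minimal numpy sketch under assumed loss terms and sizes, not the TENR model itself.

```python
import numpy as np

rng = np.random.default_rng(3)

n, d_text, d = 8, 5, 4
A = (rng.random((n, n)) < 0.3).astype(float)  # toy adjacency (topology)
np.fill_diagonal(A, 0)
T = rng.random((n, d_text))                   # toy node text features

U = rng.normal(scale=0.1, size=(n, d))        # node representations
W = rng.normal(scale=0.1, size=(d_text, d))   # text projection

def joint_loss(U, W):
    topo = np.sum((A - U @ U.T) ** 2)  # fit the linkage structure
    text = np.sum((U - T @ W) ** 2)    # tie representations to text
    return topo + text

loss0 = joint_loss(U, W)

# Plain gradient descent on both terms jointly.
lr = 0.01
for _ in range(50):
    E = U @ U.T - A
    gU = 2 * (E + E.T) @ U + 2 * (U - T @ W)
    gW = 2 * T.T @ (T @ W - U)
    U -= lr * gU
    W -= lr * gW

print(float(loss0), float(joint_loss(U, W)))
```

Because both terms share U, the text features shape the final representations even though the topology term alone would admit many equally good embeddings.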
This paper constructs a non-orthogonal overcomplete transform dictionary based on the spatiotemporal characteristics of multichannel EEG signals, so as to accurately and sparsely represent multichannel EEG signals containing spatiotemporal correlation information, thereby improving the performance of the joint compressed-sensing reconstruction algorithm for multichannel EEG based on the spatiotemporal sparse Bayesian learning model. Multichannel EEG signals from the eegmmidb EEG database are used to verify the effectiveness of the proposed algorithm. The results show that multichannel EEG signals sparsely represented with the overcomplete dictionary provide more spatiotemporal correlation information for the compressed-sensing reconstruction algorithm, improving the signal-to-noise ratio by nearly 12 dB and reducing reconstruction time by 0.75 s compared with traditional multichannel EEG compressed-sensing reconstruction algorithms, and significantly improving joint reconstruction performance.
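The pipeline implied here (a signal sparse in an overcomplete dictionary, compressively sampled, then reconstructed) can be sketched end to end. In this numpy illustration the dictionary is random rather than built from EEG spatiotemporal structure, and the sparse Bayesian learning solver is replaced with an oracle least-squares step on the known support, so it only demonstrates the recovery geometry, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

n, k = 64, 3
# Overcomplete, non-orthogonal dictionary: n x 2n unit-norm atoms
# (a random stand-in for the EEG-derived dictionary).
D = rng.normal(size=(n, 2 * n))
D /= np.linalg.norm(D, axis=0)

# Synthesize a signal that is k-sparse in D.
support = rng.choice(2 * n, size=k, replace=False)
coef = np.zeros(2 * n)
coef[support] = rng.normal(size=k)
x = D @ coef

# Compressive measurements y = Phi x with m < n.
m = 32
Phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = Phi @ x

# Oracle recovery on the true support (stand-in for the SBL solver):
# least squares against the compressed dictionary atoms.
A = Phi @ D[:, support]
c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
x_hat = D[:, support] @ c_hat

snr = 10 * np.log10(np.sum(x**2) / (np.sum((x - x_hat)**2) + 1e-30))
print(round(float(snr), 1))
```

A real solver must also estimate the support; the dictionary's job, as in the abstract, is to make the signal sparse enough for that estimation to succeed from m < n measurements.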
To address the problem that existing graph embedding methods draw their loss from a single source, leaving node representations insufficiently optimized, we propose an attentional graph auto-encoder based on synchronous joint optimization (AGE-SJO). An attention-based encoder learns node representations, and an inner-product decoder reconstructs the graph structure to produce a reconstruction loss L_R. To optimize the representations from multiple perspectives, the encoder and a multilayer perceptron are trained adversarially as generator and discriminator, yielding a generator loss L_G and a discriminator loss L_D. A synchronous joint optimization strategy is proposed that alternately optimizes the representations for k steps on L_R, k steps on L_D, and one step on L_G, and the learned representations are applied to link prediction and node clustering. Experimental results on citation datasets show that AGE-SJO performs strongly: compared with the strongest baseline, it improves AUC, AP, ACC, NMI, and ARI by 1.6%, 2.1%, 10.6%, 4.9%, and 12.4%, respectively.
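The alternating schedule stated in the abstract (k steps on L_R, k steps on L_D, then one step on L_G) can be written down directly. The sketch below records only the step order; the encoder, decoder, and discriminator updates themselves are elided.

```python
# Synchronous joint optimization schedule: per round, k steps on the
# reconstruction loss L_R, k steps on the discriminator loss L_D,
# then a single generator step on L_G.
def schedule(rounds, k):
    steps = []
    for _ in range(rounds):
        steps += ["L_R"] * k  # reconstruction updates
        steps += ["L_D"] * k  # discriminator updates
        steps += ["L_G"]      # one generator update
    return steps

steps = schedule(rounds=2, k=3)
print(steps)
```

Keeping the discriminator k steps ahead of each single generator step is a common stabilizing choice in adversarial training; here it is interleaved with the reconstruction objective so all three losses shape the same representations.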
Funding: This work was supported by the Research Deanship of Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia (Grant No. 2020/01/17215). The author also thanks the Deanship of the College of Computer Engineering and Sciences for technical support provided to complete the project successfully.
Funding: the Special Research Fund of the China Postdoctoral Science Foundation (No. 2015M582832), the Major National Science and Technology Program (No. 2015ZX01040201), and the National Natural Science Foundation of China (No. 61371196).
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11661069 and 61763041) and the Program for Changjiang Scholars and Innovative Research Team in Universities (IRT_15R40).