Abstract: To address the increasingly diverse and uncertain factors influencing load, caused by the accumulating capacity of distributed generation and new types of loads, this paper proposes a load forecasting method based on memory neural networks and curve-shape correction. For peak-load forecasting, the maximal information coefficient is used to quantify the nonlinear correlation between the load peak and its influencing factors, thereby screening the input features; accounting for both the short- and long-term autocorrelation of the peak-load series and the varying degrees of correlation between input features and the load peak, a peak-load forecasting model is built by combining an attention mechanism with a bidirectional long short-term memory (BiLSTM) neural network. For forecasting the per-unit load curve, similar days and adjacent days are combined via the inverse-error (reciprocal-of-error) method to build a per-unit curve forecasting model; to handle the non-stationary characteristics of the forecast deviation, an error-forecasting model based on complete ensemble empirical mode decomposition with adaptive noise and a BiLSTM network is built to correct the curve shape. A case study on regional grid load data from a city in northern China verifies the effectiveness of the proposed models.
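The inverse-error combination of similar-day and adjacent-day curves described in this abstract can be sketched as follows. This is a minimal illustration of the standard reciprocal-of-error weighting scheme with hypothetical curves and error values, not the paper's exact implementation:

```python
import numpy as np

def inverse_error_combine(curves, errors):
    """Weight candidate per-unit curves by the reciprocal of each
    candidate's historical forecast error, then take the weighted average."""
    inv = 1.0 / np.asarray(errors, dtype=float)
    weights = inv / inv.sum()  # normalized reciprocal-of-error weights
    combined = np.average(np.asarray(curves, dtype=float), axis=0, weights=weights)
    return weights, combined

# Hypothetical 24-point per-unit curves from a similar day and an adjacent day
t = np.arange(24)
similar_day = 0.60 + 0.30 * np.sin(2 * np.pi * t / 24)
adjacent_day = 0.55 + 0.35 * np.sin(2 * np.pi * t / 24)

# Hypothetical mean absolute errors of each candidate on recent days:
# the lower-error similar day receives the larger weight
w, combined = inverse_error_combine([similar_day, adjacent_day], [0.04, 0.06])
print(w)  # weights sum to 1
```

The resulting per-unit curve is then scaled by the separately forecast peak to recover the full load curve.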
Funding: Supported in part by the National Natural Science Foundation of China (61876011), the National Key Research and Development Program of China (2022YFB4703700), the Key Research and Development Program 2020 of Guangzhou (202007050002), and the Key-Area Research and Development Program of Guangdong Province (2020B090921003).
Abstract: Recently, there have been some attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention but ignore point content, and thus fail to establish relationships between distant but relevant points. To overcome this limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based): sampled points with similar features are clustered into the same class, and self-attention is computed within each class, enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an Inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in separate branches. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method achieves 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code of this paper is available at https://github.com/yahuiliu99/PointConT.
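The core idea of content-based attention, as opposed to spatial attention, can be sketched in a few lines: group points by feature similarity, then attend only within each group. The clustering step and deterministic initialization below are simplifications for illustration, not the PointConT implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def content_based_attention(feats, n_clusters=2, n_iter=5):
    """Group points by feature similarity (a few k-means steps in feature
    space), then compute scaled dot-product self-attention inside each group,
    so attention cost scales with cluster sizes rather than all pairs."""
    feats = np.asarray(feats, dtype=float)
    # Deterministic initialization for the sketch: evenly spaced points
    centers = feats[np.linspace(0, len(feats) - 1, n_clusters).astype(int)].copy()
    for _ in range(n_iter):
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = np.argmin(dists, axis=1)
        for k in range(n_clusters):
            if np.any(assign == k):
                centers[k] = feats[assign == k].mean(axis=0)
    out = np.empty_like(feats)
    scale = np.sqrt(feats.shape[1])
    for k in range(n_clusters):
        idx = np.where(assign == k)[0]
        if len(idx) == 0:
            continue
        x = feats[idx]
        out[idx] = softmax(x @ x.T / scale) @ x  # attention restricted to the cluster
    return out, assign

# Two clean content clusters: the first four and last four points group together
pts = np.vstack([np.zeros((4, 8)), np.ones((4, 8))])
out, assign = content_based_attention(pts)
```

Points far apart in space but close in feature content end up in the same cluster and can attend to each other, which is exactly what purely spatial windows miss.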
Abstract: To address the lack of correlation between labels in existing multi-label classification methods for digitized archives, this paper proposes a deep neural network model for archive multi-label classification, ALBERT-Seq2Seq-Attention. The model extracts text feature vectors and contextual semantic information through the multi-layer bidirectional Transformer structure inside the ALBERT (A Lite BERT) pre-trained language model; the pre-trained text features are then fed as the input sequence of a Seq2Seq-Attention (Sequence to Sequence-Attention) model, and a label dictionary is built to capture the dependencies among multiple labels. Comparative experiments on three datasets show that the model's F1 scores all exceed 90%. The model not only improves the multi-label classification of archival texts but also captures the correlations among labels.
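The label-dictionary step described above turns multi-label targets into token sequences so a Seq2Seq decoder can emit labels one at a time and learn inter-label dependencies. A minimal sketch of that encoding, with hypothetical archive categories (the actual label set and token conventions in the paper may differ):

```python
def build_label_dict(label_sets):
    """Assign an id to every distinct label seen in the corpus, reserving
    ids 0 and 1 for the sequence start/end tokens used by the decoder."""
    vocab = {"<bos>": 0, "<eos>": 1}
    for labels in label_sets:
        for label in labels:
            vocab.setdefault(label, len(vocab))
    return vocab

def labels_to_sequence(labels, vocab):
    """Encode one document's label set as a decoder target sequence."""
    return [vocab["<bos>"]] + [vocab[label] for label in labels] + [vocab["<eos>"]]

# Hypothetical archive categories
corpus_labels = [["finance", "personnel"], ["personnel"], ["legal", "finance"]]
vocab = build_label_dict(corpus_labels)
target = labels_to_sequence(["finance", "personnel"], vocab)
print(target)  # [0, 2, 3, 1]
```

Because the decoder conditions each emitted label on the ones already generated, co-occurrence patterns between labels are modeled directly, which a set of independent binary classifiers cannot do.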