With the emergence of various intelligent applications, machine learning technologies face many challenges in practice, including large-scale models, application-oriented real-time datasets, and the limited capabilities of nodes. Therefore, distributed machine learning (DML) and semi-supervised learning methods, which help solve these problems, have attracted attention in both academia and industry. In this paper, the semi-supervised learning method and the data-parallel DML framework are combined. A pseudo-label based local loss function for each distributed node is studied, and the stochastic gradient descent (SGD) based distributed parameter update rule is derived. A demo that implements pseudo-label based semi-supervised learning in the DML framework is built, and the CIFAR-10 dataset for target classification is used to evaluate its performance. Experimental results confirm the convergence and accuracy of the model using pseudo-label based semi-supervised learning in the DML framework. Given that the proportion of the pseudo-label dataset is 20%, the accuracy of the model exceeds 90% when the number of local parameter update steps between two global aggregations is less than 5. Moreover, fixing the global aggregation interval to 3, the model converges with acceptable performance degradation when the proportion of the pseudo-label dataset varies from 20% to 80%.
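A minimal sketch of the pseudo-label local loss idea the abstract describes: supervised cross-entropy on labeled samples plus cross-entropy against the model's own confident predictions on unlabeled samples. The confidence threshold, weighting factor `lam`, and function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pseudo_label_loss(logits_unlabeled, logits_labeled, labels,
                      threshold=0.95, lam=1.0):
    """Labeled cross-entropy plus pseudo-label loss on confident
    unlabeled predictions (illustrative; thresholds are assumptions)."""
    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    # Supervised term: standard cross-entropy on the labeled batch.
    p_l = softmax(logits_labeled)
    sup_loss = -np.log(p_l[np.arange(len(labels)), labels] + 1e-12).mean()

    # Unsupervised term: treat confident argmax predictions as labels.
    p_u = softmax(logits_unlabeled)
    conf = p_u.max(axis=1)
    pseudo = p_u.argmax(axis=1)
    mask = conf >= threshold            # keep only confident predictions
    if mask.any():
        unsup_loss = -np.log(p_u[mask, pseudo[mask]] + 1e-12).mean()
    else:
        unsup_loss = 0.0
    return sup_loss + lam * unsup_loss
```

In a data-parallel DML setting, each node would evaluate this local loss on its own shard and take SGD steps, with periodic global parameter aggregation.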
A distributed denial of service (DDoS) attack is the most common attack that obstructs a network and makes it unavailable to legitimate users. We propose a deep neural network (DNN) model for the detection of DDoS attacks in the Software-Defined Networking (SDN) paradigm. SDN centralizes the control plane and separates it from the data plane; it simplifies the network and eliminates device vendor specificity. Because of this open nature and centralized control, SDN can easily become a victim of DDoS attacks. Our supervised Developed Deep Neural Network (DDNN) model classifies DDoS attack traffic and legitimate traffic, and takes a larger number of feature values than previously proposed machine learning (ML) models. The proposed model scans the data to find correlated features and delivers high-quality results, enhancing the security of SDN with better accuracy than previously proposed models. We chose a recent state-of-the-art dataset that contains many novel attacks and overcomes the shortcomings and limitations of existing datasets. Our model achieves a high accuracy of 99.76% with a low false-positive rate and a low loss of 0.065%. The accuracy increases to 99.80% when the number of epochs is increased to 100. The proposed model classifies anomalous and normal traffic more accurately than previously proposed models, handles large amounts of structured and unstructured data, and can solve complex problems.
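The abstract says the model "scans the data to find the correlated features". One generic way to do that, sketched below as an assumption (the paper's exact procedure is not specified), is to drop one feature of every highly correlated pair before training:

```python
import numpy as np

def drop_correlated_features(X, threshold=0.9):
    """Return column indices to keep, removing one of each pair of
    features whose absolute Pearson correlation exceeds `threshold`.
    Generic pre-processing sketch; not the paper's exact method."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        # Keep column j only if it is weakly correlated with all kept ones.
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep
```

The surviving columns would then feed the DNN classifier.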
This paper studies the optimization problem of heterogeneous networks under a time-varying topology. Each agent accesses only one local objective function, which is nonsmooth. An improved algorithm, with noisy measurements of the local objective functions' sub-gradients and additive noises in the information exchanged between each pair of agents, is designed to minimize the sum of the objective functions of all agents. To weaken the effect of these noises, two step sizes are introduced into the control protocol. By graph theory, stochastic analysis, and martingale convergence theory, it is proved that if the sub-gradients are uniformly bounded, the sequence of digraphs is balanced, and the union graph of all digraphs is jointly strongly connected, then the designed control protocol forces all agents to find the global optimal point almost surely. Finally, the authors give some numerical examples to verify the effectiveness of the stochastic sub-gradient algorithms.
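A toy sketch of the two-step-size idea, not the paper's algorithm: agents minimize f_i(x) = |x − a_i| (nonsmooth; the network optimum of the sum is the median of the a_i) over a complete graph, with one decaying gain for the noisy consensus step and a faster-decaying one for the noisy sub-gradient step. All step-size schedules and noise levels are illustrative assumptions.

```python
import numpy as np

def distributed_subgradient(a, rounds=5000, seed=0):
    """Each agent i minimizes f_i(x) = |x - a[i]|; the sum is minimized
    at the median of a. Complete-graph toy model with additive link
    noise and noisy sub-gradients; step sizes are illustrative."""
    rng = np.random.default_rng(seed)
    a = np.asarray(a, dtype=float)
    x = rng.normal(size=len(a))               # initial agent states
    for k in range(1, rounds + 1):
        c_k = k ** -0.55                      # consensus gain (attenuates link noise)
        s_k = k ** -0.75                      # sub-gradient step size
        noisy = x + 0.1 * rng.normal(size=len(a))   # noisy neighbor states
        x = x + c_k * (noisy.mean() - x)            # consensus step
        g = np.sign(x - a) + 0.1 * rng.normal(size=len(a))  # noisy sub-gradient
        x = x - s_k * g
    return x
```

With both gains decaying, the noise contributions are square-summable while the sub-gradient steps still sum to infinity, which is the intuition behind the almost-sure convergence result.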
Software-defined networking (SDN) decouples the data plane from the control plane, but the controller then faces the danger of a single point of failure: an attacker can launch a distributed denial of service (DDoS) attack to disable the controller and compromise network security. To address DDoS traffic detection in SDN, a detection model based on information entropy and a deep neural network (DNN) is proposed. The model consists of an entropy-based preliminary detection module and a DNN-based DDoS traffic detection module. The preliminary module computes the entropy of packet source and destination IP addresses to discover suspicious traffic in the network, and the DNN-based detection module then confirms whether the suspected anomalous traffic constitutes a DDoS attack. Experiments show that the model identifies more than 99% of DDoS traffic, improves accuracy significantly, and achieves a clearly lower false-alarm rate than entropy-based detection alone, while also shortening detection time and using resources more efficiently.
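The entropy computation in the preliminary module can be sketched directly: Shannon entropy over a window of observed IP addresses. During a flood, source-IP entropy tends to rise (many spoofed sources) while destination-IP entropy falls (one victim); the threshold that flags a window as suspicious is a tuning choice not given in the abstract.

```python
import math
from collections import Counter

def ip_entropy(addresses):
    """Shannon entropy (in bits) of an IP-address sample. A window whose
    entropy crosses a chosen threshold would be handed to the DNN stage."""
    counts = Counter(addresses)
    total = len(addresses)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

For example, a window of 8 distinct sources has entropy 3 bits, while a window with a single repeated destination has entropy 0.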
As a key link in the power system, the distribution network must have its potential hazards identified to avoid instability. To suppress noise interference in the data and improve the accuracy of situation prediction, a deep-learning-based security situation awareness method for distribution networks is proposed. First, operating quantities of the distribution network are collected and denoised with singular value decomposition (SVD). Second, the relationship between the operating quantities and the security situation is analyzed, and an evaluation-value index is used to assess the network's situation. Finally, a temporal convolutional network with an attention mechanism (TCN-AM) predicts the situation evaluation value from the denoised input data, forecasting potential hazards; if instability is predicted, a warning signal is issued. Simulations on the IEEE 33-bus system and an actual distribution network show that TCN-AM predicts well, that denoising improves prediction accuracy, and that the warning signal is issued once the warning condition is met. After denoising, the proposed method achieves more accurate security situation awareness of the distribution network.
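The SVD denoising step can be sketched as truncated-SVD reconstruction: keep only the dominant singular values of the measurement matrix and discard the rest as noise. The rank-selection rule is not specified in the abstract, so it is a parameter here.

```python
import numpy as np

def svd_denoise(M, rank):
    """Truncated-SVD denoising: keep the `rank` largest singular values
    of M and reconstruct. Rank selection is left to the caller."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[rank:] = 0.0                 # zero out small (noise-dominated) modes
    return (U * s) @ Vt
```

If the underlying operating quantities are approximately low-rank, the reconstruction is closer to the clean signal than the raw measurements are.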
This work explores the restoration of images corrupted by impulse noise via a distribution-transformed network (DTN), which uses a convolutional neural network to learn pixel-distribution features from noisy images. Compared with traditional median-based algorithms, it avoids a complicated pre-processing procedure and tackles the original image directly. Additionally, unlike traditional methods that exploit the spatial neighborhood information around pixels or patches and optimize iteratively, this work captures pixel-level distribution information by means of wide and transformed network learning. DTN fits the distribution at the pixel level with larger receptive fields and more channels, and it utilizes a residual block without a batch-normalization layer to generate a good estimate. In terms of edge preservation and noise suppression, the proposed DTN consistently and significantly outperforms current state-of-the-art methods, particularly at extreme noise densities.
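For context, the impulse (salt-and-pepper) noise model that such restoration methods target can be generated as follows; the density parameter corresponds to the "noise densities" in the abstract. This is a standard noise model, not code from the paper.

```python
import numpy as np

def add_impulse_noise(img, density, seed=0):
    """Corrupt a float image in [0, 1] with salt-and-pepper impulse
    noise: each pixel is independently replaced by 0 or 1 with
    probability `density`."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape) < density          # pixels to corrupt
    out[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return out
```

A median filter removes this noise at low densities but blurs edges at the extreme densities where DTN is claimed to do best.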
Interference among wireless signals hinders concurrent transmission and reduces the throughput of wireless networks. Link scheduling is an effective way to increase network throughput and reduce transmission delay. Because the SINR (signal to interference plus noise ratio) model accurately describes the inherent characteristics of wireless signal propagation and truly reflects the interference among wireless signals, an online distributed link scheduling algorithm (OLD_LS) with a constant approximation factor, based on the SINR model, is proposed for dynamic wireless networks. "Online" means that any node may join or leave the network at any time during the execution of the algorithm; such arbitrary arrivals and departures reflect the dynamic nature of wireless networks. OLD_LS partitions the network region into regular hexagons, localizing the global interference of the SINR model. A leader election algorithm (LE) for dynamic networks is designed: as long as the rate of node dynamics is less than 1/ε, where ε ≤ 5(1 − 2^(1−α/2))/6, LE elects a leader within O(log n + log R) time; here α is the path-loss exponent, n is the number of network nodes, and R is the length of the longest link. According to the literature survey, the proposed algorithm is the first online distributed link scheduling algorithm for dynamic wireless networks.
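The SINR feasibility test at the heart of such schedulers is easy to state concretely: a set of simultaneously scheduled links is feasible if, at every receiver, the received power over noise plus the interference from all other senders reaches a threshold β. The power, path-loss exponent, noise floor, and threshold values below are illustrative assumptions.

```python
import math

def sinr_ok(links, P=1.0, alpha=3.0, noise=1e-9, beta=2.0):
    """Check SINR feasibility of concurrently scheduled links.
    links = [(sender_xy, receiver_xy), ...]; uniform transmit power P
    and path-loss exponent alpha (parameter values are illustrative)."""
    def recv(p_from, p_to):
        d = math.dist(p_from, p_to)
        return P / d ** alpha

    for i, (s_i, r_i) in enumerate(links):
        signal = recv(s_i, r_i)
        interference = sum(recv(s_j, r_i)
                           for j, (s_j, _) in enumerate(links) if j != i)
        if signal / (noise + interference) < beta:
            return False
    return True
```

Because interference sums over all senders, the constraint is global, which is exactly why OLD_LS's hexagonal partition to localize it is nontrivial.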
To address the weak impulsive-noise resistance and the difficulty of hyperparameter selection of long short-term memory (LSTM) neural-network modulation recognition under impulsive noise, this paper proposes a modulation recognition method based on an evolved LSTM network. A convolutional neural network (CNN) denoising model based on the short-time Fourier transform denoises the dataset; a quantum sailfish algorithm (QSFA), designed by combining quantum computing mechanisms with the sailfish optimizer (SFO), evolves the LSTM network to obtain optimal hyperparameters; and the evolved LSTM network serves as the classifier for automatic modulation recognition. Simulation results show that the designed CNN denoising and evolved LSTM model substantially improves recognition accuracy. Evolving the LSTM with QSFA reduces the probability that a conventional LSTM falls into a local minimum or overfits; at a mixed signal-to-noise ratio of 0 dB, the proposed method achieves an average recognition accuracy above 90% over 11 modulation signal types.
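The role QSFA plays here is population-based hyperparameter search. The sketch below is a deliberately generic stand-in (mutate a population around the best candidate), not the quantum/sailfish update rules themselves; the fitness function in practice would be the LSTM's validation loss.

```python
import numpy as np

def evolve_hyperparams(fitness, bounds, pop=20, gens=30, seed=0):
    """Generic population-based minimization over a hyperparameter box
    (e.g. [hidden units, learning rate]). Stands in for QSFA, whose
    specific update rules are not reproduced here."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    P = rng.uniform(lo, hi, size=(pop, len(lo)))    # initial population
    best = min(P, key=fitness)
    for _ in range(gens):
        # Mutate around the incumbent best and clip to the search box.
        P = best + 0.3 * (hi - lo) * rng.normal(size=P.shape)
        P = np.clip(P, lo, hi)
        cand = min(P, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best
```

Any population-based optimizer of this shape avoids hand-tuning the LSTM's hyperparameters, which is the difficulty the abstract calls out.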
Impulsive noise is widespread in power line communication (PLC) systems and severely degrades communication performance. PLC impulsive noise is usually modeled with an α-stable distribution, and achieving the best noise suppression requires knowing the type and parameters of the impulsive noise. This paper therefore proposes a hybrid-neural-network-based method for estimating the parameters of α-stable impulsive noise. Unlike traditional methods, the proposed method estimates the key α-stable parameters α (the characteristic exponent) and γ (the scale parameter) independently of each other. Simulation results show that, compared with traditional methods, the proposed method estimates the parameters more accurately, with a normalized mean square error of only about 10^-4.
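As a concrete baseline for the estimation task, symmetric α-stable samples can be generated with the classical Chambers-Mallows-Stuck method, and α and γ can be recovered with the standard log-moment estimator; this is one of the "traditional methods" the neural approach is compared against, not the paper's hybrid network.

```python
import numpy as np

def sas_sample(alpha, gamma, n, seed=0):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck
    method (alpha != 1), scaled by the scale parameter gamma."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(-np.pi / 2, np.pi / 2, n)
    W = rng.exponential(1.0, n)
    X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
         * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))
    return gamma * X

def log_moment_estimate(x):
    """Log-moment (Ma-Nikias style) estimates for symmetric alpha-stable
    data: alpha from the variance of log|x|, gamma from its mean."""
    euler = 0.5772156649015329          # Euler-Mascheroni constant
    L = np.log(np.abs(x))
    alpha_hat = 1.0 / np.sqrt(max(6.0 * L.var() / np.pi ** 2 - 0.5, 1e-12))
    gamma_hat = np.exp(L.mean() - euler * (1.0 / alpha_hat - 1.0))
    return alpha_hat, gamma_hat
```

Because log|x| has all finite moments even when x itself has infinite variance, the log-moment route sidesteps the heavy tails that break ordinary moment estimators.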
A large number of the load powers and distributed-generation power outputs in an active distribution network (ADN) are uncertain, which causes the classical affine power flow method to suffer from interval expansion and low efficiency when applied to an ADN. This in turn leads to errors in the interval power flow data sources of the ADN's cyber-physical system (CPS). To improve the accuracy of interval power flow data in the CPS of an ADN, an affine power flow method for restraining interval expansion is proposed. To counter the expansion of interval results caused by the approximation error of non-affine operations in an affine power flow method, the approximation method for the new-noise-source coefficient is improved, and the improved method is proved superior to the classical method in restraining interval expansion. To overcome the decrease in computational efficiency caused by new noise sources, a novel method of merging new noise sources during the iterative process is designed. Simulation tests are conducted on the IEEE 33-bus, PG&E 69-bus, and an actual 1180-bus system, which prove the validity of the proposed affine power flow method and its advantages in computational efficiency and in restraining interval expansion.
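For readers unfamiliar with affine arithmetic, the "new noise source" being approximated and merged can be seen in a minimal affine form: every non-affine operation (here, multiplication) introduces one fresh noise symbol to bound its approximation error. This toy class uses the standard conservative bound, not the paper's improved coefficient.

```python
class Affine:
    """Minimal affine form x0 + sum_i xi*eps_i, with eps_i in [-1, 1].
    Multiplication is non-affine, so its error is absorbed into one new
    noise symbol -- the kind of term whose coefficient and merging the
    proposed method improves. Illustrative only."""
    _next = 0

    def __init__(self, x0, terms=None):
        self.x0, self.terms = x0, dict(terms or {})

    @classmethod
    def interval(cls, lo, hi):
        i, cls._next = cls._next, cls._next + 1
        return cls((lo + hi) / 2, {i: (hi - lo) / 2})

    def rad(self):
        return sum(abs(c) for c in self.terms.values())

    def bounds(self):
        return self.x0 - self.rad(), self.x0 + self.rad()

    def __mul__(self, other):
        terms = {i: self.x0 * c for i, c in other.terms.items()}
        for i, c in self.terms.items():
            terms[i] = terms.get(i, 0.0) + other.x0 * c
        i, Affine._next = Affine._next, Affine._next + 1
        terms[i] = self.rad() * other.rad()   # conservative new noise source
        return Affine(self.x0 * other.x0, terms)
```

Each power-flow iteration multiplies affine voltage terms, so unmerged new noise symbols accumulate and both widen the intervals and slow the computation, which is the problem the abstract targets.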
Funding: Supported by the National Natural Science Foundation of China (60874063) and the Innovation and Scientific Research Foundation of Graduate Students of Heilongjiang Province (YJSCX2012-263HLJ).
Funding: Supported by the National Key R&D Program of China (No. 2020YFC1807904), the Natural Science Foundation of Beijing Municipality (No. L192002), and the National Natural Science Foundation of China (No. U1633115).
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61973329, the National Key Technology R&D Program of China under Grant No. 2021YFD2100605, and the Project of Beijing Municipal University Teacher Team Construction Support Plan under Grant No. BPHR20220104.
Funding: Supported by the National Natural Science Foundation of China (Nos. 61362001, 61362009 and 61661031), the Jiangxi Advanced Project for Post-Doctoral Research Fund (No. 2014KY02), the Young and Key Scientist Training Plan of Jiangxi Province (Nos. 20162BCB23019, 20171BBH80023 and GJJ170566), and the Fund for Postgraduates of Nanchang University (No. CX2018144).
Funding: Supported by the International Cooperation and Exchange Program of the National Natural Science Foundation of China (Grant No. 52061635104).