The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts large-scale grid connection of wind power. Therefore, wind power forecasting plays a key role in improving the safety and economic benefits of the power grid. This paper proposes a wind power prediction method based on a convolutional graph attention deep neural network with multi-wind-farm data. Based on the graph attention network and attention mechanism, the method extracts spatial-temporal characteristics from the data of multiple wind farms. Then, combined with a deep neural network, a convolutional graph attention deep neural network model is constructed. Finally, the model is trained with the quantile regression loss function to achieve deterministic and probabilistic wind power prediction from multi-wind-farm spatial-temporal data. A wind power dataset from the U.S. is taken as an example to demonstrate the efficacy of the proposed model. Compared with the selected baseline methods, the proposed model achieves the best prediction performance: the point prediction errors, i.e., root mean square error (RMSE) and normalized mean absolute percentage error (NMAPE), are 0.304 MW and 1.177%, respectively, and the comprehensive probabilistic prediction performance, i.e., continuously ranked probability score (CRPS), is 0.580. These results underline the significance of the multi-wind-farm data and the spatial-temporal feature extraction module.
Funding: Science and Technology Project of State Grid Corporation of China (4000-202122070A-0-0-00).
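The quantile regression loss mentioned above is commonly taken to be the pinball loss. A minimal sketch under that assumption (the toy arrays below are illustrative, not the paper's data):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile regression (pinball) loss for quantile level q in (0, 1).

    Penalizes under-prediction by q and over-prediction by (1 - q),
    so minimizing it drives y_pred toward the q-th conditional quantile.
    """
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

# Training one output head per quantile level yields a probabilistic forecast;
# the 0.5-quantile head doubles as the deterministic (point) prediction.
y_true = np.array([1.2, 0.8, 1.5])   # observed wind power (MW), toy values
y_pred = np.array([1.0, 0.9, 1.4])   # model output for the 0.9 quantile
print(pinball_loss(y_true, y_pred, q=0.9))
```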
A distributed generation (DG) system has several benefits over a traditional centralized power system. However, protection of a distributed generator requires special attention, as it encounters stability loss, failed reclosure, voltage fluctuations, etc. A fault therefore demands immediate identification of its location and type, especially when it occurs in a small distributed generation system, as it would otherwise adversely affect the overall system and its operation. In the past, several methods were proposed for classification and localisation of faults in distributed generation systems. Many of those methods were accurate in identifying the location, but their accuracy in identifying the type of fault was not up to the acceptable mark. The work proposed here uses a shallow artificial neural network (sANN) model for identifying the type of fault that can occur in a specific distribution network operating with distributed generators. Firstly, a distribution network consisting of two similar distributed generators (DG1 and DG2), one grid, and a 100 km distribution line is modeled. Thereafter, the voltages and currents corresponding to various faults (line-to-line, line-to-ground) at different locations are tabulated, resulting in a matrix of 500×18 inputs. Secondly, the sANN is formulated for identifying the types of faults in the system, and the above data is used to train, validate, and test the neural network. The overall result shows an unprecedented, almost-zero percent error in identifying the type of the faults.
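A shallow network of the kind described can be sketched with scikit-learn's MLPClassifier; the random data below merely stands in for the tabulated 500×18 voltage/current matrix, and the hidden-layer size and class count are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the tabulated fault data: 500 samples,
# 18 features (voltages and currents at the measurement points).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))
y = rng.integers(0, 4, size=500)  # e.g., fault classes: LL, LG, LLG, no fault

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A shallow network: a single small hidden layer, as in a typical sANN.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```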
In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data dispersed across a multitude of mobile devices. Given this situation, and the great computing power required to train deep learning models, distributed algorithms that support multi-party joint modeling have attracted wide attention. The distributed training mode relieves the huge pressure that centralized models place on computing power and communication. However, most current distributed algorithms work in a master-slave mode, typically including a central server for coordination, which to some extent causes communication pressure, data leakage, privacy violations, and other issues. To solve these problems, a decentralized, fully distributed algorithm based on a deep random weight neural network is proposed. The algorithm decomposes the original objective function into several sub-problems under consistency constraints, combines decentralized average consensus (DAC) with the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and experimental results demonstrate the effectiveness of the proposed algorithm.
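The DAC building block is simple to illustrate: each node repeatedly averages with its neighbors using a doubly stochastic weight matrix and converges to the global mean without a central server. A minimal sketch on an assumed 4-node ring (the weights are illustrative):

```python
import numpy as np

# Decentralized average consensus (DAC): each node repeatedly mixes its
# value with its neighbors' values via a doubly stochastic weight matrix,
# converging to the network-wide average without any central server.
W = np.array([
    [0.5,  0.25, 0.0,  0.25],
    [0.25, 0.5,  0.25, 0.0 ],
    [0.0,  0.25, 0.5,  0.25],
    [0.25, 0.0,  0.25, 0.5 ],
])
x = np.array([1.0, 3.0, 5.0, 7.0])  # each node's local quantity

for _ in range(50):
    x = W @ x  # one round of neighbor-only communication

print(x, "->", x.mean())  # all entries approach the true average, 4.0
```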
In this work, an Artificial Neural Network (ANN) based technique is suggested for classifying the faults which occur in hybrid power distribution systems. Power generated by the solar and wind energy-based hybrid system is fed to the grid at the Point of Common Coupling (PCC). A boost converter with a perturb and observe (P&O) algorithm is utilized in this system to obtain a constant link voltage, while the link voltage of the wind energy conversion system (WECS) is maintained with the assistance of a Proportional Integral (PI) controller. Grid synchronization is attained with the assistance of d-q theory. The ANN is utilized for the analysis of faults such as islanding, line-to-ground, and line-to-line faults. The voltage signal is observed at the PCC, and the Discrete Wavelet Transform (DWT) is employed to extract different features. Based on the extracted features, the ANN classifies the faults in an efficient manner. The simulation is done in MATLAB, and the results are also validated through a hardware implementation. A detailed fault analysis is carried out and the results are compared with existing techniques. Finally, the total harmonic distortion (THD) is reduced by 4.3% by the proposed methodology.
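A sketch of the DWT feature-extraction step, assuming a 'db4' mother wavelet, four decomposition levels, and sub-band energies as features (none of these choices are specified in the abstract):

```python
import numpy as np
import pywt  # PyWavelets

# DWT-based feature extraction from a PCC voltage signal. Sampling rate,
# wavelet, and level are common defaults, assumed here for illustration.
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
v_pcc = np.sin(2 * np.pi * 50 * t)     # 50 Hz fundamental
v_pcc[1000:1200] *= 0.4                # toy voltage dip (fault-like event)

coeffs = pywt.wavedec(v_pcc, "db4", level=4)

# One common feature set: the energy of each sub-band, fed to the ANN.
features = [float(np.sum(c ** 2)) for c in coeffs]
print(features)
```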
Cloud computing technology provides flexible, on-demand, and completely controlled computing resources and services, which are highly desirable. Despite this, with its distributed and dynamic nature and its shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. The Intrusion Detection System (IDS) is a specialized security tool that network professionals use to keep networks safe against attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways are continually changing, which requires the development of new detection methods. The purpose of this study is to improve detection accuracy, for which feature selection (FS) is critical: by focusing on the most relevant features, the computational burden of the IDS is limited while its performance and accuracy increase. In this research work, the suggested Adaptive Butterfly Optimization Algorithm (ABOA) framework is used to select an effective reduced feature subset during the feature selection phase; accurate classification is not compromised by using the ABOA technique. A Deep Neural Network (DNN) then categorizes network traffic into normal and DDoS threat traffic, and the DNN's parameters are fine-tuned with specially built algorithms to detect DDoS attacks better. Reduced reconstruction error, no exploding or vanishing gradients, and a smaller network are all benefits of the changes outlined in this paper. On performance criteria such as accuracy, precision, recall, and F1-score, the suggested architecture outperforms the other existing approaches. Hence, the proposed ABOA+DNN is an excellent method for obtaining accurate predictions, with an improved accuracy rate of 99.05% compared to other existing approaches.
Optical Character Recognition (OCR) refers to technology that uses image processing and character recognition algorithms to identify the characters in an image. This paper is a deep study of the recognition performance of OCR based on Artificial Intelligence (AI) algorithms, in which the different AI algorithms for OCR are classified and reviewed. Firstly, the mechanisms and characteristics of artificial neural network-based OCR are summarized. Secondly, the paper explores machine learning-based OCR and concludes that the algorithms available for this form of OCR are still in their infancy, with low generalization and fixed recognition errors, albeit with a better recognition effect and higher recognition accuracy than earlier approaches. Finally, the paper explores several of the latest algorithms, such as deep learning and pattern recognition algorithms, and concludes that OCR requires algorithms with higher recognition accuracy.
Funding: Science and technology projects of Gansu State Grid Corporation of China (52272220002U).
Hornik, Stinchcombe & White have shown that multilayer feedforward networks with enough hidden units are universal approximators. Roux & Bengio have proved that adding hidden units yields strictly improved modeling power, and that Restricted Boltzmann Machines (RBM) are universal approximators of discrete distributions. In this paper, we provide yet another proof. The advantage of this new proof is that it leads to several new learning algorithms. We prove that Deep Neural Networks implement an expansion and that this expansion is complete. First, we briefly review the basic Boltzmann Machine and the fact that the invariant distributions of the Boltzmann Machine generate Markov chains. We then review the θ-transformation and its completeness, i.e., any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We discuss how to convert an ABM into a Deep Neural Network. Finally, by establishing the equivalence between an ABM and the Deep Neural Network, we prove that the Deep Neural Network is complete.
Recently, we demonstrated the success of a time-synchronized state estimator using deep neural networks (DNNs) for real-time unobservable distribution systems. In this paper, we provide analytical bounds on the performance of that state estimator as a function of perturbations in the input measurements. It has already been shown that evaluating performance based only on the test dataset might not effectively indicate the ability of a trained DNN to handle input perturbations. As such, we analytically verify the robustness and trustworthiness of DNNs to input perturbations by treating them as mixed-integer linear programming (MILP) problems. The ability of batch normalization to address the scalability limitations of the MILP formulation is also highlighted. The framework is validated by performing time-synchronized distribution system state estimation for a modified IEEE 34-node system and a real-world large distribution system, both of which are incompletely observed by micro-phasor measurement units.
Funding: Department of Energy (DE-AR-0001001, DE-EE0009355) and National Science Foundation (NSF) (ECCS-2145063).
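Encoding a trained ReLU network as an MILP typically rests on the standard big-M reformulation of each activation; a sketch of that encoding (the paper's exact formulation may differ):

\[
y = \max(0, x) \quad\Longleftrightarrow\quad y \ge x,\;\; y \ge 0,\;\; y \le x + M(1 - z),\;\; y \le M z,\;\; z \in \{0, 1\},
\]

where \(M\) is a valid upper bound on \(|x|\) over the admissible input perturbations and the binary variable \(z\) indicates whether the unit is active. Applying this to every neuron turns "does any perturbation within the bound change the estimate by more than \(\varepsilon\)?" into a solvable MILP feasibility query.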
The access of unified power flow controllers (UPFC) has changed the structure and operation mode of power grids all across the world, and it has brought severe challenges to the traditional real-time calculation of security correction based on traditional models. Considering the limited computational efficiency of complex physical models, a data-driven power system security correction method with UPFC is proposed in this paper. Based on the complex mapping relationship between the operation state data and the security correction strategy, a two-stage deep neural network (DNN) learning framework is proposed, which divides the offline training task of security correction into two stages: in the first stage, a stacked auto-encoder (SAE) classification model is established, and the node correction state (0/1) is output based on the fault information; in the second stage, a DNN learning model is established, and the correction amount of each action node is obtained based on the action nodes output in the previous stage. In this paper, the UPFC demonstration project of the Nanjing West Ring Network is taken as a case study to validate the proposed method. The results show that the proposed method fully meets the real-time security correction time requirements of power grids, avoids the inherent defects of the traditional model-based method since no iterative solution is required, and can also provide reasonable security correction strategies for N-1 and N-2 faults.
Funding: Science and Technology Projects of Electric Power Research Institute of State Grid Jiangsu Electric Power Co., Ltd. (J2021171).
A distributed denial of service (DDoS) attack is the most common attack that obstructs a network and makes it unavailable to legitimate users. We propose a deep neural network (DNN) model for the detection of DDoS attacks in the Software-Defined Networking (SDN) paradigm. SDN centralizes the control plane and separates it from the data plane; it simplifies a network and eliminates vendor specification of devices. Because of this open nature and centralized control, SDN can easily become a victim of DDoS attacks. Our supervised Developed Deep Neural Network (DDNN) model can classify DDoS attack traffic and legitimate traffic, and it takes a larger number of feature values than previously proposed Machine Learning (ML) models. The proposed model scans the data to find correlated features and delivers high-quality results. The model enhances the security of SDN and has better accuracy than previously proposed models. We choose the latest state-of-the-art dataset, which contains many novel attacks and overcomes the shortcomings and limitations of existing datasets. Our model achieves a high accuracy rate of 99.76% with a low false-positive rate and a low loss rate of 0.065%; the accuracy increases to 99.80% when the number of epochs is increased to 100. The proposed model classifies anomalous and normal traffic more accurately than previously proposed models, can handle huge amounts of structured and unstructured data, and can easily solve complex problems.
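A minimal sketch of a DNN traffic classifier of this kind in Keras; the feature count, layer widths, and the commented-out training call are illustrative assumptions, not the paper's architecture:

```python
import tensorflow as tf

# A small feedforward classifier over preprocessed flow features.
# The 78-feature input and layer sizes are placeholders for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(78,)),              # one vector of flow features
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # DDoS vs. legitimate
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# With labeled flows in hand, training would look like:
# model.fit(X_train, y_train, epochs=100, validation_data=(X_val, y_val))
```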
Reactive power optimization of distribution networks is traditionally addressed by physical model-based methods, which often lead to locally optimal solutions and require heavy online inference time consumption. To improve the quality of the solution and reduce the inference time burden, this paper proposes a new graph attention network-based method to directly map the complex nonlinear relationship between graphs (topology and power loads) and reactive power scheduling schemes of distribution networks, from a data-driven perspective. The graph attention network is tailored specifically to this problem and incorporates several innovative features, such as a self-loop in the adjacency matrix, a customized loss function, and the use of max-pooling layers. Additionally, a rule-based strategy is proposed to adjust infeasible solutions that violate constraints. Simulation results on multiple distribution networks demonstrate that the proposed method outperforms other machine learning-based methods in terms of solution quality and robustness to varying load conditions. Moreover, its online inference time is significantly faster than that of traditional physical model-based methods, particularly for large-scale distribution networks.
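The self-loop mentioned above can be illustrated with the standard graph-convolution preprocessing step, where the identity is added to the adjacency matrix so each bus also attends to its own features; this GCN-style symmetric normalization is only a stand-in for the paper's attention weighting:

```python
import numpy as np

# Add self-loops so each node aggregates its own features too, then apply
# the standard symmetric normalization D^-1/2 (A+I) D^-1/2 (GCN recipe).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # toy 3-bus feeder topology
A_hat = A + np.eye(3)                    # self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

X = np.array([[0.9], [1.1], [1.0]])      # per-bus load feature (toy)
print(A_norm @ X)                        # one neighborhood aggregation step
```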
Conventional machine learning (CML) methods have been successfully applied to gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). The model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying feature optimization. The CNN is used for feature optimization to highlight sensitive gas reservoir information, and APSO-LSSVM fully learns the relationships among the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of the DL and CML methods. The prediction results are better than those of a single CNN model or a single APSO-LSSVM model. In the feature optimization process for multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model learned the gas reservoir characteristics better than the LSSVM model and achieved higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This work proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
Funding: Natural Science Foundation of Shandong Province (ZR2021MD061, ZR2023QD025); China Postdoctoral Science Foundation (2022M721972); National Natural Science Foundation of China (41174098); Young Talents Foundation of Inner Mongolia University (10000-23112101/055); Qingdao Postdoctoral Science Foundation (QDBSH20230102094).
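The hybrid idea (CNN as feature extractor, kernel machine as predictor) can be sketched as follows; kernel ridge regression stands in for LSSVM (the two are closely related), and all shapes and hyperparameters are illustrative:

```python
import numpy as np
import tensorflow as tf
from sklearn.kernel_ridge import KernelRidge

# A small CNN turns raw multicomponent seismic attribute windows into
# feature vectors, and a kernel machine does the final prediction.
X = np.random.rand(200, 32, 4)   # 200 traces, 32 samples, 4 attributes (toy)
y = np.random.rand(200)          # gas-bearing indicator (toy)

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv1D(8, 5, activation="relu", input_shape=(32, 4)),
    tf.keras.layers.GlobalAveragePooling1D(),   # 8-dim feature vector
])
features = cnn.predict(X, verbose=0)

reg = KernelRidge(kernel="rbf", alpha=1.0).fit(features, y)
print(reg.predict(features[:3]))
```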
With the rapid growth of the complexity and functionality of modern electronic systems, creating precise behavioral models of nonlinear circuits has become an attractive topic. Deep neural networks (DNNs) have been recognized as a powerful tool for nonlinear system modeling. To characterize the behavior of nonlinear circuits, a DNN-based modeling approach is proposed in this paper. The procedure is illustrated by modeling a power amplifier (PA), a typical nonlinear circuit in electronic systems. The PA model is constructed based on a feedforward neural network with three hidden layers, and the Multisim circuit simulator is used to generate the raw training data. Training and validation are carried out in the TensorFlow deep learning framework. Compared with the commonly used polynomial model, the proposed DNN model exhibits a faster convergence rate and improves the mean squared error by 13 dB. The results demonstrate that the proposed DNN model can accurately depict the input-output characteristics of nonlinear circuits on both the training and validation data sets.
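A sketch of the described setup, i.e., a feedforward network with three hidden layers fit to input-output samples of a nonlinear circuit; the synthetic tanh-compression data and layer widths are assumptions, not the paper's Multisim data or architecture:

```python
import numpy as np
import tensorflow as tf

# Toy "PA-like" data: a saturating nonlinearity over an input amplitude sweep.
x = np.linspace(-1.0, 1.0, 2000).reshape(-1, 1)
y = np.tanh(3.0 * x)

# Feedforward network with three hidden layers, as in the abstract.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, batch_size=64, verbose=0)
print("MSE:", model.evaluate(x, y, verbose=0))
```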
Transition towards carbon-neutral power systems has necessitated optimization of power dispatch in active distribution networks (ADNs) to facilitate integration of distributed renewable generation. Because network topology and line impedance are unavailable in many distribution networks, physical model-based methods may not be applicable to their operation. To tackle this challenge, some studies have proposed constraint learning, which replicates physical models by training a neural network to evaluate the feasibility of a decision (i.e., whether a decision satisfies all critical constraints or not). To ensure the accuracy of this trained neural network, the training set should contain sufficient feasible and infeasible samples. However, since ADNs are mostly operated in a normal status, only very few historical samples are infeasible. Thus, the historical dataset is highly imbalanced, which poses a significant obstacle to neural network training. To address this issue, we propose an enhanced constraint learning method. First, it leverages constraint learning to train a neural network as a surrogate of the ADN's model. Then, it introduces the Synthetic Minority Oversampling Technique to generate infeasible samples and mitigate the imbalance of the historical dataset. By incorporating historical and synthetic samples into the training set, we can significantly improve the accuracy of the neural network. Furthermore, we establish a trust region to constrain the solution and thereby enhance its reliability. Simulations confirm the benefits of the proposed method in achieving desirable optimality and feasibility while maintaining low computational complexity.
Funding: Science and Technology Development Fund, Macao SAR, China (SKL-IOTSC(UM)-2021-2023, 0003/2020/AKP, 0011/2021/AGJ).
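The oversampling step can be sketched with the imbalanced-learn implementation of SMOTE; the toy features and the roughly 3% infeasibility rate below are placeholders for (operating point, feasibility label) pairs:

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# SMOTE synthesizes new minority-class (infeasible) samples by interpolating
# between existing minority samples, rebalancing the training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                 # operating-point features (toy)
y = (rng.random(1000) < 0.03).astype(int)      # ~3% infeasible: imbalanced

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("before:", np.bincount(y), "after:", np.bincount(y_res))
```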
Ensuring stability and reliability in power systems requires accurate state estimation, which is challenging due to the growing network size, noisy measurements, and nonlinear power-flow equations. In this paper, we introduce the Graph Attention Estimation Network (GAEN) model to tackle power system state estimation (PSSE) by capitalizing on the inherent graph structure of power grids. This approach facilitates efficient information exchange among interconnected buses, yielding a distributed, computationally efficient architecture that is also resilient to cyber-attacks. We develop a thorough approach by utilizing Graph Convolutional Neural Networks (GCNNs) and an attention mechanism in PSSE based on Supervisory Control and Data Acquisition (SCADA) and Phasor Measurement Unit (PMU) measurements, addressing the limitations of previous learning architectures. The empirical results demonstrate that the proposed method achieves superior performance and scalability compared to existing techniques. Furthermore, combining local topological configurations with nodal-level data yields heightened efficacy in state estimation. This work marks a significant step in the design of advanced learning architectures for PSSE, contributing to and fostering the development of more reliable and secure power system operations.
Background: Coronary artery calcification is a well-known marker of atherosclerotic plaque burden. High-resolution intravascular optical coherence tomography (OCT) imaging has shown the potential to characterize the details of coronary calcification in vivo. In routine clinical practice, it is a time-consuming and laborious task for clinicians to review the over 250 images in a single pullback. Besides, the imbalanced label distribution within entire pullbacks is another problem, which could lead to failure of the classifier model. Given the success of deep learning methods with other imaging modalities, a thorough understanding of calcified plaque detection using Convolutional Neural Networks (CNNs) within pullbacks was required for future clinical decisions. Methods: All 33 IVOCT clinical pullbacks of 33 patients were taken from Affiliated Drum Tower Hospital, Nanjing University, between December 2017 and December 2018. For ground-truth annotation, three trained experts determined the type of plaque present in a B-scan, assigning the label 'no calcified plaque' or 'calcified plaque' to each OCT image. All experts were provided all images for labeling. The final label was determined by consensus between the experts; differing opinions on the plaque type were resolved by asking the experts to repeat their evaluation. Before the algorithm was implemented, all OCT images were resized to a resolution of 300×300, which matched the range used with standard architectures in the natural image domain. In the study, we randomly selected 26 pullbacks for training; the remaining data were used for testing. The imbalanced label distribution within entire pullbacks was a great challenge for the various CNN architectures. To resolve this problem, we designed the following experiment. First, we fine-tuned twenty different CNN architectures, including custom CNN architectures and pretrained CNN architectures; considering the nature of OCT images, the custom CNN architectures were designed with fewer than 25 layers. Then, three architectures with good performance were selected and further fine-tuned to train three different models; the CNNs differed mainly in model architecture, such as depth-based residual networks and width-based inception networks. Finally, the three CNN models were combined by majority voting, with the predicted label taken from the majority of votes. The area under the receiver operating characteristic curve (ROC AUC) was used as the evaluation metric given the imbalanced label distribution. Results: The imbalanced label distribution within pullbacks affected both convergence during the training phase and generalization of a CNN model. Different labels of OCT images could be classified with excellent performance by fine-tuning the parameters of the CNN architectures. Overall, our final ensemble performed best, with an accuracy of 90% on the 'calcified plaque' class, whose samples were fewer than those of the 'no calcified plaque' class in a pullback. Conclusions: The obtained results show that the method is fast and effective at classifying calcified plaques under imbalanced label distribution in each pullback. The results suggest that the proposed method could facilitate our understanding of coronary artery calcification in the process of atherosclerosis and help guide complex interventional strategies in coronary arteries with superficial calcification.
Funding: National Natural Science Foundation of China (NSFC) (11772093); ARC (FT140101152).
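The majority-voting ensemble reduces to a few lines; the stacked per-frame predictions below are placeholders for the three trained CNNs' outputs:

```python
import numpy as np

# Each row holds one model's binary label per OCT frame;
# the final label is whichever class gets at least 2 of 3 votes.
preds = np.array([
    [1, 0, 1, 1, 0],   # model 1 (e.g., depth-based residual network)
    [1, 0, 0, 1, 0],   # model 2 (e.g., width-based inception network)
    [1, 1, 1, 1, 0],   # model 3
])  # 1 = 'calcified plaque', 0 = 'no calcified plaque'

majority = (preds.sum(axis=0) >= 2).astype(int)
print(majority)  # -> [1 0 1 1 0]
```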
Accurate photovoltaic (PV) power prediction can effectively help the power sector make rational energy planning and dispatching decisions, promote PV consumption, make full use of renewable energy, and alleviate energy problems. To this end, this paper proposes a prediction model based on kernel principal component analysis (KPCA), a modified cuckoo search algorithm (MCS), and deep convolutional neural networks (DCNN). Firstly, KPCA is utilized to reduce the dimensionality of the features, eliminating redundant input vectors. Then, the MCS is used to optimize the parameters of the DCNN. Finally, the KPCA-MCS-DCNN photovoltaic power forecasting method is established. To verify the prediction performance of the proposed model, a photovoltaic power station in China is selected as a case study. The results show that the new hybrid KPCA-MCS-DCNN model has higher prediction accuracy and better robustness.
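The KPCA step can be sketched with scikit-learn; the RBF kernel, component count, and feature dimensionality are assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Nonlinear dimensionality reduction of plant/meteorological features
# before they enter the forecaster.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))   # e.g., irradiance, temperature, humidity, ...

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
X_reduced = kpca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)   # (500, 12) -> (500, 5)
```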
In recent years, deep neural networks have become a fascinating and influential research subject, and they play a critical role in video processing and analytics. Since video analytics is predominantly hardware-centric, implementing deep neural networks in hardware deserves closer research attention. However, the computational complexity and resource demands of deep neural networks are increasing exponentially over time. Convolutional neural networks are one of the most popular deep learning architectures, especially for image classification and video analytics, but these algorithms need an efficient implementation strategy to support more real-time computation when handling videos in hardware. Field Programmable Gate Arrays (FPGA) are thought to be more advantageous than Graphics Processing Units (GPU) for implementing convolutional neural networks in terms of energy efficiency and computational complexity. Still, an intelligent architecture is required for implementing a CNN on an FPGA for processing videos. This paper introduces a modern, high-performance, energy-efficient Bat Pruned Ensembled Convolutional network (BPEC-CNN) for processing video in hardware. The system integrates bat-evolutionary pruned layers for the CNN and implements new shared Distributed Filtering Structures (DFS) for handling the filter layers of the CNN with a pipelined data path in the FPGA. In addition, the proposed system adopts a hardware-software co-design methodology for energy efficiency and lower computational complexity. Extensive experiments are carried out using CASIA video datasets with ARTIX-7 FPGA boards (number), and various algorithm-centric parameters such as accuracy, sensitivity, and specificity, as well as architecture-centric parameters such as power, area, and throughput, are analyzed. The results are compared with existing pruned CNN architectures such as CNN-Prunner, against which the proposed architecture shows 25% better performance.
Big data analytics in business intelligence does not provide effective data retrieval methods or job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance the capability of data retrieval and job scheduling to speed up the operation of big data analytics and overcome the inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, exploiting a deep neural network to predict the approximate execution time of a job enables prioritized job scheduling based on shortest job first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. On the other hand, the proposed job scheduling algorithm defeats both the first-in-first-out and the memory-sensitive heterogeneous earliest finish time scheduling algorithms, effectively shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
Funding: Ministry of Science and Technology, Taiwan (MOST110-2622-E-390-001 and MOST109-2622-E-390-002-CC3).
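The shortest-job-first policy driven by predicted execution times can be sketched with a min-heap; job names and predicted durations are illustrative:

```python
import heapq

# Jobs are queued by their DNN-predicted execution time, so short jobs run
# first and the average waiting time drops relative to first-in-first-out.
jobs = [("report_etl", 120.0), ("ad_hoc_query", 4.5), ("model_train", 900.0),
        ("dashboard_refresh", 30.0)]

heap = [(predicted_s, name) for name, predicted_s in jobs]
heapq.heapify(heap)

waited, t = [], 0.0
while heap:
    predicted_s, name = heapq.heappop(heap)
    waited.append(t)            # this job waited t seconds before starting
    t += predicted_s
print("average waiting time:", sum(waited) / len(waited), "seconds")
```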