Journal Articles
578 articles found.
1. The Short-Term Prediction of Wind Power Based on the Convolutional Graph Attention Deep Neural Network
Authors: Fan Xiao, Xiong Ping, Yeyang Li, Yusen Xu, Yiqun Kang, Dan Liu, Nianming Zhang. Energy Engineering (EI), 2024, Issue 2, pp. 359-376.
The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the grid connection of wind power on a large scale. Therefore, wind power forecasting plays a key role in improving the safety and economic benefits of the power grid. This paper proposes a wind power prediction method based on a convolutional graph attention deep neural network with multi-wind farm data. Based on the graph attention network and attention mechanism, the method extracts spatial-temporal characteristics from the data of multiple wind farms. Then, combined with a deep neural network, a convolutional graph attention deep neural network model is constructed. Finally, the model is trained with the quantile regression loss function to achieve deterministic and probabilistic wind power prediction based on multi-wind farm spatial-temporal data. A wind power dataset in the U.S. is taken as an example to demonstrate the efficacy of the proposed model. Compared with the selected baseline methods, the proposed model achieves the best prediction performance. The point prediction errors (i.e., root mean square error (RMSE) and normalized mean absolute percentage error (NMAPE)) are 0.304 MW and 1.177%, respectively, and the comprehensive performance of probabilistic prediction (i.e., continuously ranked probability score (CRPS)) is 0.580. Thus, the significance of multi-wind farm data and the spatial-temporal feature extraction module is self-evident.
Keywords: wind power prediction; deep neural network; graph attention network; attention mechanism; quantile regression
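The quantile-regression training described in entry 1 rests on the standard pinball loss; the sketch below is a minimal NumPy illustration, not the authors' implementation, and the function name and sample values are assumptions.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Standard pinball (quantile) loss for quantile level q in (0, 1).

    Penalizes under-prediction by q and over-prediction by (1 - q),
    so minimizing it drives y_pred toward the q-th conditional quantile.
    """
    err = y_true - y_pred
    return np.mean(np.maximum(q * err, (q - 1.0) * err))

# Toy check: wind power observations vs. predictions for the 0.9 quantile
y_true = np.array([0.30, 0.55, 0.72])   # MW, hypothetical values
y_pred = np.array([0.28, 0.60, 0.70])
print(pinball_loss(y_true, y_pred, q=0.9))
```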
2. Identification of Type of a Fault in Distribution System Using Shallow Neural Network with Distributed Generation
Authors: Saurabh Awasthi, Gagan Singh, Nafees Ahamad. Energy Engineering (EI), 2023, Issue 4, pp. 811-829.
A distributed generation system (DG) has several benefits over a traditional centralized power system. However, protection in the case of distributed generators requires special attention, as such systems encounter stability loss, failed re-closure, voltage fluctuations, etc. Thereby, identifying the location and type of a fault without delay demands immediate attention, especially when the fault occurs in a small distributed generation system, as it would adversely affect the overall system and its operation. In the past, several methods were proposed for classification and localisation of faults in a distributed generation system. Many of those methods were accurate in identifying location, but their accuracy in identifying the type of fault was not up to the acceptable mark. The work proposed here uses a shallow artificial neural network (sANN) model for identifying the particular type of fault that could happen in a specific distribution network when used in conjunction with distributed generators. First, a distribution network consisting of two similar distributed generators (DG1 and DG2), one grid, and a 100 km distribution line is modeled. Thereafter, different voltages and currents corresponding to various faults (line-to-line, line-to-ground) at different locations are tabulated, resulting in a matrix of 500×18 inputs. Second, the sANN is formulated for identifying the types of faults in the system, and the data obtained above are used to train, validate, and test the neural network. The overall result shows an almost zero percent error in identifying the type of the faults.
Keywords: distribution network; distributed generation; power system modeling; fault identification; neural network; renewable energy systems
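A shallow fault-type classifier of the kind described in entry 2 can be sketched with scikit-learn; the 500×18 shape follows the abstract, while the hidden-layer size, random data, and class labels are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 18))      # 500 samples x 18 voltage/current features
y = rng.integers(0, 4, size=500)    # hypothetical fault classes (e.g., LL, LG, LLG, none)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A single small hidden layer keeps the network "shallow"
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```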
3. Fully Distributed Learning for Deep Random Vector Functional-Link Networks
Authors: Huada Zhu, Wu Ai. Journal of Applied Mathematics and Physics, 2024, Issue 4, pp. 1247-1262.
In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data dispersed across a multitude of mobile devices. Given this situation, and the considerable computing power needed to train deep learning models, distributed algorithms that enable multi-party joint modeling have attracted wide attention. The distributed training mode relieves the huge pressure that a centralized model places on computing power and communication. However, most current distributed algorithms work in a master-slave mode, often including a central server for coordination, which to some extent causes communication pressure, data leakage, privacy violations, and other issues. To solve these problems, a decentralized, fully distributed algorithm based on a deep random weight neural network is proposed. The algorithm decomposes the original objective function into several sub-problems under consistency constraints, combines decentralized average consensus (DAC) and the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and experimental results demonstrate the effectiveness of the proposed algorithm.
Keywords: distributed optimization; deep neural network; random vector functional-link (RVFL) network; alternating direction method of multipliers (ADMM)
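The decentralized average consensus step used in entry 3 can be illustrated in a few lines; this is a generic DAC iteration over an assumed ring topology, not the paper's code.

```python
import numpy as np

def dac_step(x, neighbors, eps=0.2):
    """One decentralized average consensus update:
    each node moves toward the values of its neighbors."""
    return np.array([
        x[i] + eps * sum(x[j] - x[i] for j in neighbors[i])
        for i in range(len(x))
    ])

# 4 nodes on a ring; values converge to the global average (2.5)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = np.array([1.0, 2.0, 3.0, 4.0])
for _ in range(50):
    x = dac_step(x, neighbors)
print(x)  # all entries close to 2.5
```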
4. Power Quality Improvement Using ANN Controller for Hybrid Power Distribution Systems
Authors: Abdul Quawi, Y. Mohamed Shuaib, M. Manikandan. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 6, pp. 3469-3486.
In this work, an Artificial Neural Network (ANN) based technique is suggested for classifying the faults which occur in hybrid power distribution systems. Power generated by the solar and wind energy-based hybrid system is fed to the grid at the Point of Common Coupling (PCC). A boost converter along with the perturb and observe (P&O) algorithm is utilized in this system to obtain a constant link voltage. In contrast, the link voltage of the wind energy conversion system (WECS) is maintained with the assistance of a Proportional Integral (PI) controller. Grid synchronization is attained with the assistance of the d-q theory. For the analysis of faults like islanding, line-to-ground, and line-to-line faults, the ANN is utilized. The voltage signal is observed at the PCC, and the Discrete Wavelet Transform (DWT) is employed to obtain different features. Based on the collected features, the ANN classifies the faults in an efficient manner. The simulation is done in MATLAB, and the results are also validated through hardware implementation. Detailed fault analysis is carried out, and the results are compared with existing techniques. Finally, the Total Harmonic Distortion (THD) is lessened by 4.3% by using the proposed methodology.
Keywords: artificial neural network; discrete wavelet transform; hybrid power distribution system; power quality; power quality disturbances
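DWT-based feature extraction of the kind entry 4 applies to the PCC voltage signal can be sketched with PyWavelets; the sampling rate, wavelet family, decomposition level, and energy features are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

fs = 10_000                                  # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
v_pcc = np.sin(2 * np.pi * 50 * t)           # 50 Hz fundamental
v_pcc[1000:1200] *= 0.4                      # hypothetical voltage sag

# Multi-level DWT of the voltage signal; db4 is a common choice
coeffs = pywt.wavedec(v_pcc, "db4", level=4)

# Per-band energies serve as compact features for the ANN classifier
features = [float(np.sum(c ** 2)) for c in coeffs]
print(features)
```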
5. Adaptive Butterfly Optimization Algorithm (ABOA) Based Feature Selection and Deep Neural Network (DNN) for Detection of Distributed Denial-of-Service (DDoS) Attacks in Cloud
Authors: S. Sureshkumar, G.K.D. Prasanna Venkatesan, R. Santhosh. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 10, pp. 1109-1123.
Cloud computing technology provides flexible, on-demand, and completely controlled computing resources and services, which are highly desirable. Despite this, with its distributed and dynamic nature and shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. The Intrusion Detection System (IDS) is a specialized security tool that network professionals use to keep networks safe from attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways continually change, which requires the development of new detection methods. The purpose of this study is to improve detection accuracy, for which Feature Selection (FS) is critical: by focusing on the most relevant elements, the IDS's computational burden is limited while its performance and accuracy increase. In this research work, the suggested Adaptive Butterfly Optimization Algorithm (ABOA) framework is used to assess the effectiveness of a reduced feature subset during the feature selection phase, and accurate classification is not compromised by using the ABOA technique. The design of Deep Neural Networks (DNN) has simplified the categorization of network traffic into normal and DDoS threat traffic, and the DNN's parameters can be fine-tuned with specially built algorithms to detect DDoS attacks better. Reduced reconstruction error, no exploding or vanishing gradients, and a reduced network are all benefits of the changes outlined in this paper. In terms of performance criteria such as accuracy, precision, recall, and F1-score, the suggested architecture outperforms the other existing approaches. Hence, the proposed ABOA+DNN is an excellent method for obtaining accurate predictions, with an improved accuracy rate of 99.05% compared to other existing approaches.
Keywords: cloud computing; distributed denial of service; intrusion detection system; adaptive butterfly optimization algorithm; deep neural network
6. Review of Optical Character Recognition for Power System Image Based on Artificial Intelligence Algorithm
Authors: Xun Zhang, Wanrong Bai, Haoyang Cui. Energy Engineering (EI), 2023, Issue 3, pp. 665-679.
Optical Character Recognition (OCR) refers to a technology that uses image processing technology and character recognition algorithms to identify characters on an image. This paper is a deep study of the recognition effect of OCR based on Artificial Intelligence (AI) algorithms, in which the different AI algorithms for OCR analysis are classified and reviewed. First, the mechanisms and characteristics of artificial neural network-based OCR are summarized. Second, the paper explores machine learning-based OCR and draws the conclusion that the algorithms available for this form of OCR are still in their infancy, with low generalization and fixed recognition errors, albeit with better recognition effect and higher recognition accuracy. Finally, the paper explores several of the latest algorithms, such as deep learning and pattern recognition algorithms, and concludes that OCR requires algorithms with higher recognition accuracy.
Keywords: optical character recognition; artificial intelligence; power system image; artificial neural network; machine learning; deep learning
7. Completeness Problem of the Deep Neural Networks
Authors: Ying Liu, Shaohui Wang. American Journal of Computational Mathematics, 2018, Issue 2, pp. 184-196.
Hornik, Stinchcombe & White have shown that multilayer feedforward networks with enough hidden layers are universal approximators. Roux & Bengio have proved that adding hidden units yields strictly improved modeling power, and that Restricted Boltzmann Machines (RBM) are universal approximators of discrete distributions. In this paper, we provide yet another proof. The advantage of this new proof is that it will lead to several new learning algorithms. We prove that Deep Neural Networks implement an expansion and that the expansion is complete. First, we briefly review the basic Boltzmann Machine and the fact that the invariant distributions of the Boltzmann Machine generate Markov chains. We then review the θ-transformation and its completeness, i.e., any function can be expanded by the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of the ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We discuss how to convert an ABM into a Deep Neural Network. Finally, by establishing the equivalence between an ABM and the Deep Neural Network, we prove that the Deep Neural Network is complete.
Keywords: AI; universal approximators; Boltzmann machine; Markov chain; invariant distribution; completeness; deep neural network
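For reference, the invariant distribution entry 7 builds on is, in the standard formulation, the Boltzmann (Gibbs) distribution over binary states; the textbook form is reproduced below, with notation assumed rather than taken from the paper.

```latex
% Energy of a state s \in \{0,1\}^n with weights w_{ij} and biases b_i
E(s) = -\sum_{i<j} w_{ij} s_i s_j - \sum_i b_i s_i,
\qquad
p(s) = \frac{e^{-E(s)/T}}{\sum_{s'} e^{-E(s')/T}}
```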
8. Analytical Verification of Performance of Deep Neural Network Based Time-synchronized Distribution System State Estimation
Authors: Behrouz Azimian, Shiva Moshtagh, Anamitra Pal, Shanshan Ma. Journal of Modern Power Systems and Clean Energy (SCIE, EI, CSCD), 2024, Issue 4, pp. 1126-1134.
Recently, we demonstrated the success of a time-synchronized state estimator using deep neural networks (DNNs) for real-time unobservable distribution systems. In this paper, we provide analytical bounds on the performance of the state estimator as a function of perturbations in the input measurements. It has already been shown that evaluating performance based only on the test dataset might not effectively indicate the ability of a trained DNN to handle input perturbations. As such, we analytically verify the robustness and trustworthiness of DNNs to input perturbations by treating them as mixed-integer linear programming (MILP) problems. The ability of batch normalization to address the scalability limitations of the MILP formulation is also highlighted. The framework is validated by performing time-synchronized distribution system state estimation for a modified IEEE 34-node system and a real-world large distribution system, both of which are incompletely observed by micro-phasor measurement units.
Keywords: deep neural network (DNN); distribution system state estimation (DSSE); mixed-integer linear programming (MILP); robustness; trustworthiness
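The MILP treatment of a trained network in entry 8 rests on the common mixed-integer encoding of a ReLU unit; the big-M form below is the standard textbook encoding under assumed pre-activation bounds, not a claim about the paper's exact formulation.

```latex
% y = \max(x, 0) with known bounds l \le x \le u and binary z \in \{0,1\}:
y \ge x, \quad y \ge 0, \quad
y \le x - l\,(1 - z), \quad y \le u\,z
% z = 1 forces y = x (active unit); z = 0 forces y = 0 (inactive unit).
```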
9. A Data Driven Security Correction Method for Power Systems with UPFC (Cited by: 1)
Authors: Qun Li, Ningyu Zhang, Jianhua Zhou, Xinyao Zhu, Peng Li. Energy Engineering (EI), 2023, Issue 6, pp. 1485-1502.
The access of unified power flow controllers (UPFC) has changed the structure and operation mode of power grids all across the world, and it has brought severe challenges to the traditional real-time calculation of security correction based on traditional models. Considering the limited computational efficiency of complex physical models, a data-driven power system security correction method with UPFC is proposed in this paper. Based on the complex mapping relationship between operation state data and the security correction strategy, a two-stage deep neural network (DNN) learning framework is proposed, which divides the offline training task of security correction into two stages: in the first stage, a stacked auto-encoder (SAE) classification model is established, and the node correction state (0/1) is output based on the fault information; in the second stage, a DNN learning model is established, and the correction amount of each action node is obtained based on the action nodes output in the previous stage. In this paper, the UPFC demonstration project of the Nanjing West Ring Network is taken as a case study to validate the proposed method. The results show that the proposed method can fully meet the real-time security correction time requirements of power grids, avoids the inherent defects of the traditional model-based method by requiring no iterative solution, and can provide reasonable security correction strategies for N-1 and N-2 faults.
Keywords: security correction; data-driven; deep neural network (DNN); unified power flow controller (UPFC); overload of transmission lines
10. Detecting and Mitigating DDoS Attacks in SDNs Using Deep Neural Network
Authors: Gul Nawaz, Muhammad Junaid, Adnan Akhunzada, Abdullah Gani, Shamyla Nawazish, Asim Yaqub, Adeel Ahmed, Huma Ajab. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 2157-2178.
Distributed denial of service (DDoS) attack is the most common attack that obstructs a network and makes it unavailable for legitimate users. We propose a deep neural network (DNN) model for the detection of DDoS attacks in the Software-Defined Networking (SDN) paradigm. SDN centralizes the control plane and separates it from the data plane; it simplifies a network and eliminates vendor specification of a device. Because of this open nature and centralized control, SDN can easily become a victim of DDoS attacks. We propose a supervised Developed Deep Neural Network (DDNN) model that can classify DDoS attack traffic and legitimate traffic. Our DDNN model takes a large number of feature values as compared to previously proposed Machine Learning (ML) models. The proposed model scans the data to find correlated features and delivers high-quality results. The model enhances the security of SDN and has better accuracy than previously proposed models. We choose the latest state-of-the-art dataset, which consists of many novel attacks and overcomes the shortcomings and limitations of existing datasets. Our model achieves a high accuracy rate of 99.76% with a low false-positive rate and a low loss of 0.065%, and the accuracy increases to 99.80% as we increase the number of epochs to 100 rounds. Our proposed model classifies anomalous and normal traffic more accurately than previously proposed models, can handle a huge amount of structured and unstructured data, and can solve complex problems.
Keywords: distributed denial of service (DDoS) attacks; software-defined networking (SDN); classification; deep neural network (DNN)
11. Data-driven Reactive Power Optimization of Distribution Networks via Graph Attention Networks
Authors: Wenlong Liao, Dechang Yang, Qi Liu, Yixiong Jia, Chenxi Wang, Zhe Yang. Journal of Modern Power Systems and Clean Energy (SCIE, EI, CSCD), 2024, Issue 3, pp. 874-885.
Reactive power optimization of distribution networks is traditionally addressed by physical model based methods, which often lead to locally optimal solutions and require heavy online inference time consumption. To improve the quality of the solution and reduce the inference time burden, this paper proposes a new graph attention networks based method to directly map the complex nonlinear relationship between graphs (topology and power loads) and reactive power scheduling schemes of distribution networks, from a data-driven perspective. The graph attention network is tailored specifically to this problem and incorporates several innovative features, such as a self-loop in the adjacency matrix, a customized loss function, and the use of max-pooling layers. Additionally, a rule-based strategy is proposed to adjust infeasible solutions that violate constraints. Simulation results on multiple distribution networks demonstrate that the proposed method outperforms other machine learning based methods in terms of solution quality and robustness to varying load conditions. Moreover, its online inference time is significantly faster than that of traditional physical model based methods, particularly for large-scale distribution networks.
Keywords: reactive power optimization; graph neural network; distribution network; machine learning; data-driven
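The attention coefficients at the heart of the graph attention layer used in entry 11 follow the standard GAT form; the equations below are that generic form, where the self-loop mentioned in the abstract corresponds to node i attending to itself as well as its neighbors.

```latex
e_{ij} = \mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}\,[\mathbf{W}\mathbf{h}_i \,\|\, \mathbf{W}\mathbf{h}_j]\right),
\qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i \cup \{i\}} \exp(e_{ik})},
\qquad
\mathbf{h}_i' = \sigma\!\Big(\sum_{j \in \mathcal{N}_i \cup \{i\}} \alpha_{ij}\, \mathbf{W}\mathbf{h}_j\Big)
```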
12. Deep learning CNN-APSO-LSSVM hybrid fusion model for feature optimization and gas-bearing prediction
Authors: Jiu-Qiang Yang, Nian-Tian Lin, Kai Zhang, Yan Cui, Chao Fu, Dong Zhang. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 2329-2344.
Conventional machine learning (CML) methods have been successfully applied for gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to directly extract features from sensitive multicomponent seismic attributes, considerably simplifying the feature optimization. A CNN was used for feature optimization to highlight sensitive gas reservoir information. APSO-LSSVM was used to fully learn the relationship between the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of the DL and CML methods. The prediction results obtained are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization process for multicomponent seismic attribute data, the CNN demonstrated better gas reservoir feature extraction capabilities than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model learned the gas reservoir characteristics better than the LSSVM model and had a higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the other individual models. This method proves the effectiveness of DL technology for the feature extraction of gas reservoirs and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
Keywords: multicomponent seismic data; deep learning; adaptive particle swarm optimization; convolutional neural network; least squares support vector machine; feature optimization; gas-bearing distribution prediction
13. Deep Neural Network Based Behavioral Model of Nonlinear Circuits
Authors: Zhe Jin, Sekouba Kaba. Journal of Applied Mathematics and Physics, 2021, Issue 3, pp. 403-412.
With the rapid growth of complexity and functionality of modern electronic systems, creating precise behavioral models of nonlinear circuits has become an attractive topic. Deep neural networks (DNNs) have been recognized as a powerful tool for nonlinear system modeling. To characterize the behavior of nonlinear circuits, a DNN based modeling approach is proposed in this paper. The procedure is illustrated by modeling a power amplifier (PA), which is a typical nonlinear circuit in electronic systems. The PA model is constructed based on a feedforward neural network with three hidden layers, and the Multisim circuit simulator is applied to generate the raw training data. Training and validation are carried out in the TensorFlow deep learning framework. Compared with the commonly used polynomial model, the proposed DNN model exhibits a faster convergence rate and improves the mean squared error by 13 dB. The results demonstrate that the proposed DNN model can accurately depict the input-output characteristics of nonlinear circuits on both the training and validation data sets.
Keywords: nonlinear circuits; deep neural networks; behavioral model; power amplifier
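A feedforward regression network with three hidden layers, as entry 13 describes, takes only a few lines in TensorFlow/Keras; the layer widths, activation, and optimizer below are illustrative assumptions, and the toy data generator stands in for the circuit-simulator output.

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for simulator data: a mildly nonlinear input-output map
x = np.linspace(-1.0, 1.0, 2000).reshape(-1, 1).astype("float32")
y = np.tanh(2.5 * x) + 0.05 * x**3           # hypothetical PA-like compression

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="tanh", input_shape=(1,)),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(32, activation="tanh"),
    tf.keras.layers.Dense(1),                 # linear output for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=50, batch_size=64, validation_split=0.2, verbose=0)
print("MSE:", model.evaluate(x, y, verbose=0))
```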
14. Constraint Learning-based Optimal Power Dispatch for Active Distribution Networks with Extremely Imbalanced Data
Authors: Yonghua Song, Ge Chen, Hongcai Zhang. CSEE Journal of Power and Energy Systems (SCIE, EI, CSCD), 2024, Issue 1, pp. 51-65.
The transition towards carbon-neutral power systems has necessitated optimization of power dispatch in active distribution networks (ADNs) to facilitate the integration of distributed renewable generation. Due to the unavailability of network topology and line impedance in many distribution networks, physical model-based methods may not be applicable to their operations. To tackle this challenge, some studies have proposed constraint learning, which replicates physical models by training a neural network to evaluate the feasibility of a decision (i.e., whether a decision satisfies all critical constraints or not). To ensure the accuracy of this trained neural network, the training set should contain sufficient feasible and infeasible samples. However, since ADNs are mostly operated in a normal status, only very few historical samples are infeasible. Thus, the historical dataset is highly imbalanced, which poses a significant obstacle to neural network training. To address this issue, we propose an enhanced constraint learning method. First, it leverages constraint learning to train a neural network as a surrogate of the ADN's model. Then, it introduces the Synthetic Minority Oversampling Technique to generate infeasible samples to mitigate the imbalance of the historical dataset. By incorporating historical and synthetic samples into the training set, we can significantly improve the accuracy of the neural network. Furthermore, we establish a trust region to constrain and thereby enhance the reliability of the solution. Simulations confirm the benefits of the proposed method in achieving desirable optimality and feasibility while maintaining low computational complexity.
Keywords: deep learning; demand response; distribution networks; imbalanced data; optimal power flow
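The oversampling step entry 14 uses is available off the shelf in the imbalanced-learn package; the class ratio and data below are fabricated for illustration.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
# Hypothetical dispatch decisions: 980 feasible (label 1), 20 infeasible (label 0)
X = rng.normal(size=(1000, 6))
y = np.array([1] * 980 + [0] * 20)

# SMOTE interpolates between minority-class neighbors to synthesize new samples
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))   # {1: 980, 0: 20} -> {1: 980, 0: 980}
```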
15. Boosting efficiency in state estimation of power systems by leveraging attention mechanism
Authors: Elson Cibaku, Fernando Gama, SangWoo Park. Energy and AI (EI), 2024, Issue 2, pp. 438-449.
Ensuring stability and reliability in power systems requires accurate state estimation, which is challenging due to the growing network size, noisy measurements, and nonlinear power-flow equations. In this paper, we introduce the Graph Attention Estimation Network (GAEN) model to tackle power system state estimation (PSSE) by capitalizing on the inherent graph structure of power grids. This approach facilitates efficient information exchange among interconnected buses, yielding a distributed, computationally efficient architecture that is also resilient to cyber-attacks. We develop a thorough approach by utilizing Graph Convolutional Neural Networks (GCNNs) and the attention mechanism in PSSE based on Supervisory Control and Data Acquisition (SCADA) and Phasor Measurement Unit (PMU) measurements, addressing the limitations of previous learning architectures. According to the empirical results obtained from the experiments, the proposed method demonstrates superior performance and scalability compared to existing techniques. Furthermore, the amalgamation of local topological configurations with nodal-level data yields heightened efficacy in the domain of state estimation. This work marks a significant achievement in the design of advanced learning architectures in PSSE, contributing to and fostering the development of more reliable and secure power system operations.
Keywords: power grids; state estimation; attention mechanism; graph neural networks; distributed computation; grid cyber-security
16. Automatic Calcified Plaques Detection in the OCT Pullbacks Using Convolutional Neural Networks (Cited by: 2)
Authors: Chunliu He, Yifan Yin, Jiaqiu Wang, Biao Xu, Zhiyong Li. Journal of Medical Biomechanics (医用生物力学; EI, CAS, CSCD), 2019, Issue A01, pp. 109-110.
Background: Coronary artery calcification is a well-known marker of atherosclerotic plaque burden. High-resolution intravascular optical coherence tomography (OCT) imaging has shown the potential to characterize the details of coronary calcification in vivo. In routine clinical practice, it is a time-consuming and laborious task for clinicians to review the over 250 images in a single pullback. Besides, the imbalanced label distribution within entire pullbacks is another problem, which can lead to failure of the classifier model. Given the success of deep learning methods with other imaging modalities, a thorough understanding of calcified plaque detection using convolutional neural networks (CNNs) within pullbacks was required for future clinical decisions. Methods: All 33 IVOCT clinical pullbacks of 33 patients were taken at the Affiliated Drum Tower Hospital, Nanjing University between December 2017 and December 2018. For ground-truth annotation, three trained experts determined the type of plaque present in a B-scan, assigning the labels 'no calcified plaque' or 'calcified plaque' to each OCT image. All experts were provided all images for labeling. The final label was determined based on consensus between the experts; differing opinions on the plaque type were resolved by asking the experts to repeat their evaluation. Before running the algorithm, all OCT images were resized to a resolution of 300×300, which matched the range used with standard architectures in the natural image domain. In the study, we randomly selected 26 pullbacks for training, and the remaining data were used for testing. The imbalanced label distribution within entire pullbacks was a great challenge for various CNN architectures. To resolve the problem, we designed the following experiment. First, we fine-tuned twenty different CNN architectures, including customized and pretrained CNN architectures; considering the nature of OCT images, the customized CNN architectures were designed with fewer than 25 layers. Then, three with good performance were selected and further fine-tuned to train three different models. The CNNs differed mainly in model architecture, such as depth-based residual networks and width-based inception networks. Finally, the three CNN models were combined by majority voting, the predicted label being the one with the most votes. Areas under the receiver operating characteristic curve (ROC AUC) were used as the evaluation metric for the imbalanced label distribution. Results: The imbalanced label distribution within pullbacks affected both convergence during the training phase and generalization of a CNN model. Different labels of OCT images could be classified with excellent performance by fine-tuning parameters of the CNN architectures. Overall, our final ensemble performed best, with an accuracy of 90% on the 'calcified plaque' class, whose samples were far fewer than those of the 'no calcified plaque' class in a pullback. Conclusions: The obtained results show that the method is fast and effective at classifying calcified plaques under imbalanced label distribution in each pullback. The results suggest that the proposed method could facilitate our understanding of coronary artery calcification in the process of atherosclerosis and help guide complex interventional strategies in coronary arteries with superficial calcification.
Keywords: calcified plaque; intravascular optical coherence tomography; deep learning; imbalanced label distribution; convolutional neural networks
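The per-frame majority vote across three CNNs described in entry 16 reduces to a mode over predicted labels; the sketch below assumes three arrays of per-image predictions rather than any specific model.

```python
import numpy as np

def majority_vote(*model_preds):
    """Combine per-image binary predictions from several models:
    each image receives the label predicted by the majority."""
    stacked = np.stack(model_preds)              # shape: (n_models, n_images)
    return (stacked.sum(axis=0) > stacked.shape[0] / 2).astype(int)

# Hypothetical predictions from three fine-tuned CNNs on five OCT frames
resnet_like    = np.array([1, 0, 1, 0, 1])
inception_like = np.array([1, 0, 0, 0, 1])
custom_cnn     = np.array([0, 0, 1, 0, 1])
print(majority_vote(resnet_like, inception_like, custom_cnn))  # [1 0 1 0 1]
```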
17. Forecasting Model of Photovoltaic Power Based on KPCA-MCS-DCNN (Cited by: 1)
Authors: Huizhi Gou, Yuncai Ning. Computer Modeling in Engineering & Sciences (SCIE, EI), 2021, Issue 8, pp. 803-822.
Accurate photovoltaic (PV) power prediction can effectively help the power sector to make rational energy planning and dispatching decisions, promote PV consumption, make full use of renewable energy, and alleviate energy problems. To address this research objective, this paper proposes a prediction model based on kernel principal component analysis (KPCA), a modified cuckoo search algorithm (MCS), and deep convolutional neural networks (DCNN). First, KPCA is utilized to reduce the dimension of the features, which aims to reduce redundant input vectors. Then, MCS is used to optimize the parameters of the DCNN. Finally, the KPCA-MCS-DCNN photovoltaic power forecasting method is established. In order to verify the prediction performance of the proposed model, this paper selects a photovoltaic power station in China for an example analysis. The results show that the new hybrid KPCA-MCS-DCNN model has higher prediction accuracy and better robustness.
Keywords: photovoltaic power prediction; kernel principal component analysis; modified cuckoo search algorithm; deep convolutional neural networks
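The KPCA dimension-reduction stage in entry 17 is directly available in scikit-learn; the RBF kernel, component count, and random features here are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 12))   # e.g., a year of daily PV-related features

# Nonlinear dimension reduction: keep 5 kernel principal components
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)
X_reduced = kpca.fit_transform(X)
print(X_reduced.shape)           # (365, 5) -> compact inputs for the DCNN
```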
18. FPGA Implementation of Deep Learning Model for Video Analytics
Authors: P.N. Palanisamy, N. Malmurugan. Computers, Materials & Continua (SCIE, EI), 2022, Issue 4, pp. 791-808.
In recent years, deep neural networks have become a fascinating and influential research subject, and they play a critical role in video processing and analytics. Since video analytics is predominantly hardware-centric, the implementation of deep neural networks in hardware deserves closer research attention. However, the computational complexity and resource constraints of deep neural networks are increasing exponentially over time. Convolutional neural networks are among the most popular deep learning architectures, especially for image classification and video analytics, but these algorithms need an efficient implementation strategy for incorporating more real-time computation when handling videos in hardware. Field Programmable Gate Arrays (FPGA) are thought to be more advantageous than Graphics Processing Units (GPU) for implementing convolutional neural networks, in terms of energy efficiency and computational complexity, yet an intelligent architecture is still required for implementing a CNN in an FPGA for video processing. This paper introduces a modern high-performance, energy-efficient Bat Pruned Ensembled Convolutional network (BPEC-CNN) for processing video in hardware. The system integrates bat-evolutionary pruned layers for the CNN and implements new shared Distributed Filtering Structures (DFS) for handling the filter layers of the CNN with a pipelined data-path in the FPGA. In addition, the proposed system adopts a hardware-software co-design methodology for energy efficiency and lower computational complexity. Extensive experiments are carried out using CASIA video datasets with ARTIX-7 FPGA boards (number), and both algorithm-centric parameters (accuracy, sensitivity, specificity) and architecture-centric parameters (power, area, throughput) are analyzed. The results are compared with existing pruned CNN architectures such as CNN-Prunner, over which the proposed architecture shows 25% better performance.
Keywords: deep neural networks; field programmable gate arrays; convolutional neural networks; distributed filtering structures; bat-pruned
19. Optimizing Big Data Retrieval and Job Scheduling Using Deep Learning Approaches
Authors: Bao Rong Chang, Hsiu-Fen Tsai, Yu-Chieh Lin. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 2, pp. 783-815.
Big data analytics in business intelligence do not provide effective data retrieval methods and job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance the capability of data retrieval and job scheduling to speed up the operation of big data analytics and overcome inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder and Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, exploiting a deep neural network to predict the approximate execution time of a job enables prioritized job scheduling based on shortest job first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, significantly improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. On the other hand, the proposed job scheduling algorithm defeats both the first-in-first-out and memory-sensitive heterogeneous earliest finish time scheduling algorithms, effectively shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
Keywords: stacked sparse autoencoder; Elasticsearch; distributed indexing; data retrieval; deep neural network; job scheduling
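Shortest-job-first scheduling with predicted runtimes, as used in entry 19, reduces to a sort on the DNN's time estimates; the job names and predicted times below are fabricated for illustration.

```python
# Jobs with DNN-predicted execution times (seconds); values are hypothetical
jobs = {"etl_load": 42.0, "report_gen": 7.5, "index_rebuild": 120.0, "query_batch": 3.2}

# Shortest job first: run in ascending order of predicted time
order = sorted(jobs, key=jobs.get)
print(order)  # ['query_batch', 'report_gen', 'etl_load', 'index_rebuild']

# Average waiting time under this order (the first job waits 0)
wait, elapsed = 0.0, 0.0
for name in order[:-1]:
    elapsed += jobs[name]
    wait += elapsed
print("average wait:", wait / len(jobs))
```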
20. Power System Fault Prediction Based on Deep Convolutional Neural Networks
Authors: Zhu Yanfang, Yan Lei, Chang Kang, Zhao Wenna, Li Yuan, Xu Limei. Journal of Power Supply (电源学报; CSCD, PKU Core), 2024, Issue S01, pp. 179-185.
Through in-depth research on deep convolutional neural networks, a power system fault prediction method based on a deep convolutional neural network is proposed to ensure safe system operation. A wide-area measurement system measures every branch and node; the obtained power values and key eigenvalues are used as the input and output, respectively, of the deep convolutional neural network model. These two sets of data are used for training, and the deep convolutional neural network AlexNet is used to analyze the mapping relationship between the input and output data, establishing a power system fault prediction model based on a deep convolutional neural network. Through eigenvalue grouping, oscillation mode screening, data preprocessing, model training, and model evaluation, the operating state of the power system is assessed and fault prediction is achieved. Experimental results show that the key eigenvalues computed by this method are essentially consistent with actual results, indicating high reliability; regularization improves the model's generalization and prevents overfitting. Compared with the accuracy and false-alarm rates of other methods, the proposed method achieves an accuracy of 99.52% and a false-alarm rate of 1.16%, with a high overall evaluation index and clearly superior assessment performance.
Keywords: deep convolution; neural network; power system; fault prediction; AlexNet