Journal Articles
2,531 articles found
A Novel Hybrid Ensemble Learning Approach for Enhancing Accuracy and Sustainability in Wind Power Forecasting
1
Authors: Farhan Ullah, Xuexia Zhang, Mansoor Khan, Muhammad Abid, Abdullah Mohamed. Computers, Materials & Continua, SCIE EI, 2024, Issue 5, pp. 3373-3395 (23 pages)
Accurate wind power forecasting is critical for system integration and stability as renewable energy reliance grows. Traditional approaches frequently struggle with complex data and non-linear connections. This article presents a novel approach for hybrid ensemble learning that is based on rigorous requirements engineering concepts. The approach finds significant parameters influencing forecasting accuracy by evaluating real-time Modern-Era Retrospective Analysis for Research and Applications (MERRA2) data from several European wind farms using in-depth stakeholder research and requirements elicitation. Ensemble learning is used to develop a robust model, while a temporal convolutional network handles time-series complexities and data gaps. The ensemble-temporal neural network is enhanced by providing different input parameters including training layers, hidden and dropout layers along with activation and loss functions. The proposed framework is further analyzed by comparing state-of-the-art forecasting models in terms of Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). The energy efficiency performance indicators showed that the proposed model demonstrates error reduction percentages of approximately 16.67%, 28.57%, and 81.92% for MAE, and 38.46%, 17.65%, and 90.78% for RMSE for MERRA wind farms 1, 2, and 3, respectively, compared to other existing methods. These quantitative results show the effectiveness of our proposed model with MAE values ranging from 0.0010 to 0.0156 and RMSE values ranging from 0.0014 to 0.0174. This work highlights the effectiveness of requirements engineering in wind power forecasting, leading to enhanced forecast accuracy and grid stability, ultimately paving the way for more sustainable energy solutions.
Keywords: Ensemble learning; machine learning; real-time data analysis; stakeholder analysis; temporal convolutional network; wind power forecasting
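The error-reduction percentages quoted above follow from comparing the MAE and RMSE of two forecasters on the same test series. A minimal sketch of that comparison in Python (NumPy only; the forecast arrays are synthetic placeholders, not the paper's MERRA2 data):

import numpy as np

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Synthetic wind-power series and two competing forecasts (illustrative only).
rng = np.random.default_rng(0)
actual = rng.uniform(0.0, 1.0, 500)
forecast_proposed = actual + rng.normal(0.0, 0.01, 500)   # tighter errors
forecast_baseline = actual + rng.normal(0.0, 0.03, 500)   # looser errors

for name, metric in (("MAE", mae), ("RMSE", rmse)):
    prop, base = metric(actual, forecast_proposed), metric(actual, forecast_baseline)
    reduction = 100.0 * (base - prop) / base   # "error reduction percentage" relative to the baseline
    print(f"{name}: proposed={prop:.4f} baseline={base:.4f} reduction={reduction:.2f}%")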
Reinforcement Learning-Based Energy Management for Hybrid Power Systems:State-of-the-Art Survey,Review,and Perspectives
2
Authors: Xiaolin Tang, Jiaxin Chen, Yechen Qin, Teng Liu, Kai Yang, Amir Khajepour, Shen Li. Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2024, Issue 3, pp. 1-25 (25 pages)
The new energy vehicle plays a crucial role in green transportation, and the energy management strategy of hybrid power systems is essential for ensuring energy-efficient driving. This paper presents a state-of-the-art survey and review of reinforcement learning-based energy management strategies for hybrid power systems. Additionally, it envisions the outlook for autonomous intelligent hybrid electric vehicles, with reinforcement learning as the foundational technology. First of all, to provide a macro view of historical development, the brief history of deep learning, reinforcement learning, and deep reinforcement learning is presented in the form of a timeline. Then, the comprehensive survey and review are conducted by collecting papers from mainstream academic databases. Enumerating most of the contributions along three main directions (algorithm innovation, powertrain innovation, and environment innovation) provides an objective review of the research status. Finally, to advance the application of reinforcement learning in autonomous intelligent hybrid electric vehicles, future research plans positioned as "Alpha HEV" are envisioned, integrating Autopilot and energy-saving control.
Keywords: New energy vehicle; hybrid power system; reinforcement learning; energy management strategy
Deep Learning-Based Secure Transmission Strategy with Sensor-Transmission-Computing Linkage for Power Internet of Things
3
Authors: Bin Li, Linghui Kong, Xiangyi Zhang, Bochuo Kou, Hui Yu, Bowen Liu. Computers, Materials & Continua, SCIE EI, 2024, Issue 3, pp. 3267-3282 (16 pages)
The automatic collection of power grid situation information, along with real-time multimedia interaction between the front and back ends during the accident handling process, has generated a massive amount of power grid data. While wireless communication offers a convenient channel for grid terminal access and data transmission, the bandwidth of wireless communication is limited. Additionally, the broadcast nature of wireless transmission raises concerns about the potential for unauthorized eavesdropping during data transmission. To address these challenges and achieve reliable, secure, and real-time transmission of power grid data, an intelligent security transmission strategy with sensor-transmission-computing linkage is proposed in this paper. The primary objective of this strategy is to maximize the confidentiality capacity of the system. To tackle this, an optimization problem is formulated, taking interruption probability and interception probability as constraints. To efficiently solve this optimization problem, a low-complexity algorithm rooted in deep reinforcement learning is designed to derive a suboptimal solution. Finally, simulation results substantiate the validity of the proposed strategy in guaranteeing communication security, stability, and timeliness, confirming that it significantly contributes to safeguarding communication integrity, system stability, and timely data delivery.
Keywords: Secure transmission; deep learning; power Internet of Things; sensor-transmission-computing
Safety-Constrained Multi-Agent Reinforcement Learning for Power Quality Control in Distributed Renewable Energy Networks
4
Authors: Yongjiang Zhao, Haoyi Zhong, Chang Cyoon Lim. Computers, Materials & Continua, SCIE EI, 2024, Issue 4, pp. 449-471 (23 pages)
This paper examines the difficulties of managing distributed power systems, notably due to the increasing use of renewable energy sources, and focuses on voltage control challenges exacerbated by their variable nature in modern power grids. To tackle the unique challenges of voltage control in distributed renewable energy networks, researchers are increasingly turning towards multi-agent reinforcement learning (MARL). However, MARL raises safety concerns due to the unpredictability of agent actions during their exploration phase. This unpredictability can lead to unsafe control measures. To mitigate these safety concerns in MARL-based voltage control, our study introduces a novel approach: Safety-Constrained Multi-Agent Reinforcement Learning (SC-MARL). This approach incorporates a specialized safety constraint module specifically designed for voltage control within the MARL framework. This module ensures that the MARL agents carry out voltage control actions safely. The experiments demonstrate that, in the 33-bus, 141-bus, and 322-bus power systems, employing SC-MARL for voltage control reduced the Voltage Out of Control Rate (%V.out) from 0.43, 0.24, and 2.95 to 0, 0.01, and 0.03, respectively. Additionally, the Reactive Power Loss (Q loss) decreased from 0.095, 0.547, and 0.017 to 0.062, 0.452, and 0.016 in the corresponding systems.
Keywords: Power quality control; multi-agent reinforcement learning; safety-constrained MARL
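The headline metric, the Voltage Out of Control Rate (%V.out), is simply the share of voltage samples leaving an allowed band. A small illustrative sketch (the 0.95-1.05 p.u. band and the random voltages are assumptions, not values from the paper):

import numpy as np

def voltage_out_of_control_rate(v, v_min=0.95, v_max=1.05):
    # Fraction of (time step, bus) samples whose per-unit voltage leaves [v_min, v_max].
    out_of_band = (v < v_min) | (v > v_max)
    return out_of_band.mean()

# Placeholder per-unit voltages: 1000 time steps x 33 buses (illustrative only).
rng = np.random.default_rng(1)
voltages = rng.normal(1.0, 0.02, size=(1000, 33))
print(f"%V.out = {100 * voltage_out_of_control_rate(voltages):.2f}%")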
Intelligent Power Grid Load Transferring Based on Safe Action-Correction Reinforcement Learning
5
Authors: Fuju Zhou, Li Li, Tengfei Jia, Yongchang Yin, Aixiang Shi, Shengrong Xu. Energy Engineering, EI, 2024, Issue 6, pp. 1697-1711 (15 pages)
When a line failure occurs in a power grid, a load transfer is implemented to reconfigure the network by changing the states of tie-switches and load demands. Computation speed is one of the major performance indicators in power grid load transfer, as a fast load transfer model can greatly reduce the economic loss of post-fault power grids. In this study, a reinforcement learning method is developed based on a deep deterministic policy gradient. The tedious training process of the reinforcement learning model can be conducted offline, so the model shows satisfactory performance in real-time operation, indicating that it is suitable for fast load transfer. Considering that the reinforcement learning model performs poorly in satisfying safety constraints, a safe action-correction framework is proposed to modify the learning model. In the framework, the action of load shedding is corrected according to sensitivity analysis results under a small discrete increment so as to match the constraints of line flow limits. The results of case studies indicate that the proposed method is practical for fast and safe power grid load transfer.
Keywords: Load transfer; reinforcement learning; electrical power grid; safety constraints
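The safe action-correction idea, nudging the load-shedding action in small discrete increments until sensitivity-estimated line flows respect their limits, can be sketched as a simple correction loop. The sensitivity matrix, base flows, and limits below are illustrative assumptions, not the paper's network:

import numpy as np

def correct_action(load_shed, sensitivity, base_flow, flow_limit, step=0.01, max_iter=500):
    # Increase load shedding in small discrete increments until the linearized
    # line flows (base_flow - sensitivity @ load_shed) fall within flow_limit.
    action = load_shed.copy()
    for _ in range(max_iter):
        flows = base_flow - sensitivity @ action
        overload = np.abs(flows) - flow_limit
        worst = int(np.argmax(overload))
        if overload[worst] <= 0:
            break  # all line flows within limits
        # Shed a bit more load at the bus most sensitive to the overloaded line.
        bus = int(np.argmax(np.abs(sensitivity[worst])))
        action[bus] += step
    return action

# Illustrative 3-line / 4-bus example.
sens = np.array([[0.4, 0.1, 0.0, 0.2],
                 [0.1, 0.3, 0.2, 0.0],
                 [0.0, 0.2, 0.4, 0.1]])
raw_action = np.zeros(4)  # the RL policy proposed no shedding
safe_action = correct_action(raw_action, sens,
                             base_flow=np.array([1.2, 0.8, 0.9]),
                             flow_limit=np.array([1.0, 1.0, 1.0]))
print("corrected load shedding per bus:", safe_action)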
A Deep Reinforcement Learning-Based Technique for Optimal Power Allocation in Multiple Access Communications
6
Authors: Sepehr Soltani, Ehsan Ghafourian, Reza Salehi, Diego Martín, Milad Vahidi. Intelligent Automation & Soft Computing, 2024, Issue 1, pp. 93-108 (16 pages)
For many years, researchers have explored power allocation (PA) algorithms driven by models in wireless networks where multiple-user communications with interference are present. Nowadays, data-driven machine learning methods have become quite popular in analyzing wireless communication systems, among which deep reinforcement learning (DRL) plays a significant role in solving optimization issues under certain constraints. To this purpose, in this paper, we investigate the PA problem in a k-user multiple access channel (MAC), where k transmitters (e.g., mobile users) aim to send an independent message to a common receiver (e.g., base station) through wireless channels. To this end, we first train the deep Q network (DQN) with a deep Q learning (DQL) algorithm over the simulation environment, utilizing offline learning. Then, the DQN is used with real data in the online training method for the PA issue by maximizing the sum rate subject to the source power. Finally, the simulation results indicate that our proposed DQN method provides better performance in terms of the sum rate compared with available approaches such as fractional programming (FP) and weighted minimum mean squared error (WMMSE). Additionally, by considering different user densities, we show that our proposed DQN outperforms benchmark algorithms, thereby verifying good generalization ability over wireless multi-user communication systems.
Keywords: Deep reinforcement learning; deep Q learning; multiple access channel; power allocation
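The quantity being maximized, the sum rate under a transmit-power limit, has a standard closed form when interference is treated as noise; a minimal sketch of evaluating it for a candidate power allocation (channel gains, noise level, and the allocation are illustrative assumptions):

import numpy as np

def sum_rate(power, gain, noise=1e-3):
    # Sum of per-user rates log2(1 + SINR), treating other users' signals as interference.
    signal = power * gain
    interference = signal.sum() - signal
    sinr = signal / (noise + interference)
    return np.log2(1.0 + sinr).sum()

# Illustrative 4-user MAC: channel gains and a candidate allocation chosen by the agent.
gains = np.array([0.9, 0.5, 0.7, 0.3])
p_max = 1.0
candidate = np.array([1.0, 0.2, 0.6, 0.1]) * p_max
print(f"sum rate = {sum_rate(candidate, gains):.3f} bit/s/Hz")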
Learning-based user association and dynamic resource allocation in multi-connectivity enabled unmanned aerial vehicle networks
7
Authors: Zhipeng Cheng, Minghui Liwang, Ning Chen, Lianfen Huang, Nadra Guizani, Xiaojiang Du. Digital Communications and Networks, SCIE CSCD, 2024, Issue 1, pp. 53-62 (10 pages)
Unmanned Aerial Vehicles (UAVs) serving as aerial base stations to provide communication services for ground users is a flexible and cost-effective paradigm in B5G. Besides, dynamic resource allocation and multi-connectivity can be adopted to further harness the potential of UAVs in improving communication capacity, in which case the interference among users becomes a pivotal disincentive requiring effective solutions. To this end, we investigate the Joint UAV-User Association, Channel Allocation, and transmission Power Control (J-UACAPC) problem in a multi-connectivity-enabled UAV network with constrained backhaul links, where each UAV can determine the reusable channels and transmission power to serve the selected ground users. The goal is to mitigate co-channel interference while maximizing long-term system utility. The problem is modeled as a cooperative stochastic game with a hybrid discrete-continuous action space, and a Multi-Agent Hybrid Deep Reinforcement Learning (MAHDRL) algorithm is proposed to address it. Extensive simulation results demonstrate the effectiveness of the proposed algorithm and show that it achieves higher system utility than the baseline methods.
Keywords: UAV-user association; multi-connectivity; resource allocation; power control; multi-agent deep reinforcement learning
Sparse Adversarial Learning for FDIA Attack Sample Generation in Distributed Smart Grid
8
Authors: Fengyong Li, Weicheng Shen, Zhongqin Bi, Xiangjing Su. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 5, pp. 2095-2115 (21 pages)
False data injection attack (FDIA) is an attack that affects the stability of a grid cyber-physical system (GCPS) by evading the detecting mechanism of bad data. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples to construct a robust detection model. To address this problem, this paper designs an efficient sample generative adversarial model of FDIA attack at the public-private network edge, which can effectively bypass the detection model to threaten the power grid system. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating the time-aligned data and normal data, which is used to learn the distribution characteristics between normal data and attack data through iterative confrontation. Furthermore, we introduce a Gaussian hybrid distribution matrix by aggregating the network structure of attack data characteristics and normal data characteristics, which can connect and calculate FDIA data with normal characteristics. Finally, efficient FDIA attack samples can be sequentially generated through interactive adversarial learning. Extensive simulation experiments are conducted with IEEE 14-bus and IEEE 118-bus system data, and the results demonstrate that the generated attack samples of the proposed model can present superior performance compared to state-of-the-art models in terms of attack strength, robustness, and covert capability.
Keywords: Distributed smart grid; FDIA; adversarial learning; power public-private network edge
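A minimal generator/discriminator pair in the spirit of the described adversarial sample generator, written with plain fully connected layers; the measurement dimension, architecture, and random "normal" data are placeholders rather than the paper's ResNet+FCN design:

import torch
import torch.nn as nn

dim, latent = 14, 8  # e.g. one measurement per bus of a 14-bus case (assumed)
G = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))
D = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

normal_data = torch.randn(256, dim)  # placeholder "normal" grid measurements

for step in range(200):
    real = normal_data[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, latent))
    # Discriminator: separate real measurements from generated attack samples.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: produce attack samples the discriminator accepts as normal.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("generated attack-like sample:", G(torch.randn(1, latent)).detach().numpy().round(3))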
Power Prediction of VLSI Circuits Using Machine Learning (cited: 1)
9
Authors: E. Poovannan, S. Karthik. Computers, Materials & Continua, SCIE EI, 2023, Issue 1, pp. 2161-2177 (17 pages)
The gap between the circuit design stage and its timing requirements has widened with the increasing complexity of circuits. A large database is needed to undertake important analytical work such as statistical methods, thermal research, and IR-drop research, which results in extended running times. This work focuses on the assessment of test power. Because of the enormous number of tests for current designs and the excessive time required for each, maximum power ratings cannot be evaluated across all tests. Nevertheless, test safety is important for producing trustworthy findings and for avoiding loss of yield and damage to the chip. Generally, effective power assessment is only possible on a limited sample of pre-selected tests. Thus, a key objective is to find the tests that are likely to produce the worst-case situations for test power. This paper offers a machine-learning-based circuit power estimation (MLCPE) system for the selection of tests. Two distinct prediction techniques are utilized. Firstly, it forecasts the behavior of tests to find those with high power dissipation. Secondly, the switching activity and energy data are linked to the semiconductor design, identifying small problem areas. Several types of algorithms are utilized and compared. The findings show high accuracy and efficiency in forecasting, making such methods suitable for selecting the worst-case scenarios.
Keywords: Power estimation; machine learning; circuit simulation; VLSI implementation
Enhancing Iterative Learning Control With Fractional Power Update Law (cited: 1)
10
Authors: Zihan Li, Dong Shen, Xinghuo Yu. IEEE/CAA Journal of Automatica Sinica, SCIE EI CSCD, 2023, Issue 5, pp. 1137-1149 (13 pages)
The P-type update law has been the mainstream technique used in iterative learning control (ILC) systems, which resembles linear feedback control with asymptotical convergence. In recent years, finite-time control strategies such as terminal sliding mode control have been shown to be effective in ramping up convergence speed by introducing fractional power with feedback. In this paper, we show that such a mechanism can equally ramp up the learning speed in ILC systems. We first propose a fractional power update rule for ILC of single-input-single-output linear systems. A nonlinear error dynamics is constructed along the iteration axis to illustrate the evolutionary converging process. Using the nonlinear mapping approach, fast convergence towards the limit cycles of tracking errors inherently existing in ILC systems is proven. The limit cycles are shown to be tunable to determine the steady states. Numerical simulations are provided to verify the theoretical results.
Keywords: Asymptotic convergence; convergence rate; finite-iteration tracking; fractional power learning rule; limit cycles
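The fractional power learning rule contrasted here with the classical P-type law can be written as u_{k+1}(t) = u_k(t) + rho * |e_k(t)|^alpha * sign(e_k(t)). A small simulation sketch on a first-order SISO plant (the plant, gain rho, and exponent alpha are illustrative choices, not the paper's example):

import numpy as np

def run_ilc(alpha, rho=0.8, iterations=8, T=50):
    # Iterate u_{k+1}(t) = u_k(t) + rho * |e_k(t)|^alpha * sign(e_k(t))
    # on the plant y(t) = 0.9 * y(t-1) + u(t); alpha = 1 recovers the P-type law.
    reference = np.sin(np.linspace(0, 2 * np.pi, T))
    u = np.zeros(T)
    max_errors = []
    for _ in range(iterations):
        y, prev = np.zeros(T), 0.0
        for t in range(T):
            y[t] = 0.9 * prev + u[t]
            prev = y[t]
        e = reference - y
        max_errors.append(np.max(np.abs(e)))
        u = u + rho * np.sign(e) * np.abs(e) ** alpha
    return np.round(max_errors, 4)

print("P-type (alpha=1.0) max error per iteration    :", run_ilc(alpha=1.0))
print("fractional (alpha=0.5) max error per iteration:", run_ilc(alpha=0.5))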
Power Transformer Fault Diagnosis Using Random Forest and Optimized Kernel Extreme Learning Machine (cited: 1)
11
Authors: Tusongjiang Kari, Zhiyang He, Aisikaer Rouzi, Ziwei Zhang, Xiaojing Ma, Lin Du. Intelligent Automation & Soft Computing, SCIE, 2023, Issue 7, pp. 691-705 (15 pages)
The power transformer is one of the most crucial devices in the power grid, and it is important to determine incipient faults of power transformers quickly and accurately. Input features play critical roles in fault diagnosis accuracy. In order to further improve the fault diagnosis performance of power transformers, a random forest feature selection method coupled with an optimized kernel extreme learning machine is presented in this study. Firstly, the random forest feature selection approach is adopted to rank 42 related input features derived from gas concentration, gas ratio, and energy-weighted dissolved gas analysis. Afterwards, a kernel extreme learning machine tuned by the Aquila optimization algorithm is implemented to adjust crucial parameters and select the optimal feature subsets. The diagnosis accuracy is used to assess the fault diagnosis capability of the concerned feature subsets. Finally, the optimal feature subsets are applied to establish the fault diagnosis model. According to the experimental results based on two public datasets and comparison with 5 conventional approaches, the average accuracy of the proposed method is up to 94.5%, which is superior to that of other conventional approaches. Fault diagnosis performances verify that the optimum feature subset obtained by the presented method can dramatically improve power transformer fault diagnosis accuracy.
Keywords: Power transformer; fault diagnosis; kernel extreme learning machine; Aquila optimization; random forest
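The two-stage recipe, ranking features with a random forest and then scoring growing subsets with a kernel learner, can be sketched with scikit-learn; a kernel extreme learning machine is not a standard scikit-learn estimator, so an RBF SVM stands in for it here, and the synthetic data stands in for the 42 DGA-derived features:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder dataset standing in for the 42 dissolved-gas-analysis features.
X, y = make_classification(n_samples=600, n_features=42, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: rank features by random forest importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
ranking = np.argsort(rf.feature_importances_)[::-1]

# Step 2: evaluate growing feature subsets with a kernel classifier.
for k in (5, 10, 20, 42):
    cols = ranking[:k]
    acc = SVC(kernel="rbf").fit(X_tr[:, cols], y_tr).score(X_te[:, cols], y_te)
    print(f"top {k:2d} features -> accuracy {acc:.3f}")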
Semi-asynchronous personalized federated learning for short-term photovoltaic power forecasting
12
Authors: Weishan Zhang, Xiao Chen, Ke He, Leiming Chen, Liang Xu, Xiao Wang, Su Yang. Digital Communications and Networks, SCIE CSCD, 2023, Issue 5, pp. 1221-1229 (9 pages)
Accurate forecasting of photovoltaic power generation is one of the key enablers for the integration of solar photovoltaic systems into power grids. Existing deep-learning-based methods can perform well if there are sufficient training data and enough computational resources. However, there are challenges in building models through centralized shared data due to data privacy concerns and industry competition. Federated learning is a new distributed machine learning approach which enables training models across edge devices while data reside locally. In this paper, we propose an efficient semi-asynchronous federated learning framework for short-term solar power forecasting and evaluate the framework performance using a CNN-LSTM model. We design a personalization technique and a semi-asynchronous aggregation strategy to improve the efficiency of the proposed federated forecasting approach. Thorough evaluations using a real-world dataset demonstrate that the federated models can achieve significantly higher forecasting performance than fully local models while protecting data privacy, and that the proposed semi-asynchronous aggregation and personalization technique make the forecasting framework more robust in real-world scenarios.
Keywords: Photovoltaic power forecasting; federated learning; edge computing; CNN-LSTM
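One plausible reading of the semi-asynchronous aggregation step is a staleness-discounted weighted average of whichever client models have arrived, blended with the current global model; the rule and numbers below are an assumption for illustration, not the paper's exact algorithm:

import numpy as np

def semi_async_aggregate(global_w, client_ws, staleness, mix=0.8, decay=0.5):
    # Blend current global weights with a staleness-discounted average of client weights:
    # fresher clients (smaller staleness) receive larger aggregation coefficients.
    coeffs = np.array([decay ** s for s in staleness], dtype=float)
    coeffs /= coeffs.sum()
    client_avg = coeffs @ np.stack(client_ws)
    return (1 - mix) * global_w + mix * client_avg

# Toy example: a 4-parameter "model" and three clients with different staleness.
global_w = np.zeros(4)
clients = [np.array([1.0, 0.8, 1.2, 0.9]),
           np.array([0.5, 0.6, 0.4, 0.7]),
           np.array([2.0, 1.9, 2.1, 1.8])]
print(semi_async_aggregate(global_w, clients, staleness=[0, 1, 3]))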
Spotted Hyena-Bat Optimized Extreme Learning Machine for Solar Power Extraction
13
Authors: K. Madumathi, S. Chandrasekar. Computer Systems Science & Engineering, SCIE EI, 2023, Issue 5, pp. 1821-1836 (16 pages)
Artificial intelligence, machine learning, and deep learning algorithms have been widely used for Maximum Power Point Tracking (MPPT) in solar systems. In traditional MPPT strategies, tracking the Global Maximum Power Point (GMPP) under partial shading remains a challenging task, as such strategies tend to converge on nearby local maximum power points when shading is non-uniform. The advent of artificial intelligence in MPPT has enabled accurate tracking of the GMPP while improving the performance and efficiency of MPPT under Partial Shading Conditions (PSC). Still, the selection of an efficient learning-based MPPT is complex because each model has its advantages and drawbacks. Recently, meta-heuristic-algorithm-based learning techniques have provided better tracking efficiency but still exhibit poor performance under PSC. This work presents an optimization based on Spotted Hyena Enabled Reliable BAT (SHERB) learning models, SHERB-MPPT, integrated with powerful extreme learning machines to identify the GMPP with fast convergence, low steady-state oscillations, and good tracking efficiency. Extensive testing was carried out using MATLAB-SIMULINK with 50,000 data combinations gathered under partial shading and normal settings. As a result of simulations, the proposed approach offers 99.7% tracking efficiency with a slower convergence speed. To demonstrate the superiority of the proposed system, we compared its performance with other hybrid MPPT learning models. The results prove that the proposed hybrid MPPT model outperforms other techniques in effectively identifying the GMPP under partial shading conditions.
Keywords: Global maximum power point tracking; artificial intelligence; machine learning; deep learning; spotted hyena-BAT algorithm
Power Information System Database Cache Model Based on Deep Machine Learning
14
Author: Manjiang Xing. Intelligent Automation & Soft Computing, SCIE, 2023, Issue 7, pp. 1081-1090 (10 pages)
At present, the database cache model of the power information system has problems such as slow running speed and low database hit rate. To this end, this paper proposes a database cache model for power information systems based on deep machine learning. The caching model includes program caching, Structured Query Language (SQL) preprocessing, and core caching modules. Among them, statement efficiency is improved by adjusting operations such as multi-table joins and keyword replacement in the SQL optimizer. In the core caching module, predictive models are built using boosted regression trees, and a series of regression tree models is generated with machine learning algorithms. The resource occupancy rate in the power information system is analyzed to dynamically adjust the voting selection of the regression trees, and the voting threshold of the prediction model is adjusted dynamically as well; by analogy, the cache model is re-initialized. The experimental results show that the model has a good cache hit rate and cache efficiency and can improve the data cache performance of the power information system. It achieves a high hit rate and short delay time, maintains a good hit rate under different memory sizes, and occupies little space and CPU during actual operation, which is beneficial to the efficient and fast operation of the power information system.
Keywords: Deep machine learning; power information system; database; cache model
Reactive Power Flow Convergence Adjustment Based on Deep Reinforcement Learning
15
Authors: Wei Zhang, Bin Ji, Ping He, Nanqin Wang, Yuwei Wang, Mengzhe Zhang. Energy Engineering, EI, 2023, Issue 9, pp. 2177-2192 (16 pages)
Power flow calculation is the basis of power grid planning, and many system analysis tasks require convergent power flow conditions. To address the unsolvable power flow problem caused by reactive power imbalance, a method for adjusting reactive power flow convergence based on deep reinforcement learning is proposed. The deep reinforcement learning method takes switching parallel reactive compensation as the action space and sets the reward value based on the power flow convergence and reactive power adjustment. For the non-convergent power flow, the 500 kV nodes with reactive power compensation devices on the low-voltage side are converted into PV nodes by node type switching, and the quantified reactive power non-convergence index is acquired. Then, the action space and reward value of deep reinforcement learning are reasonably designed and the adjustment strategy is obtained by taking the reactive power non-convergence index as the algorithm state space. Finally, the effectiveness of the power flow convergence adjustment algorithm is verified on an actual provincial power grid system.
Keywords: Power flow calculation; reactive power flow convergence; node type switching; deep reinforcement learning
A Deep Reinforcement Learning-Based Power Control Scheme for the 5G Wireless Systems
16
Authors: Renjie Liang, Haiyang Lyu, Jiancun Fan. China Communications, SCIE CSCD, 2023, Issue 10, pp. 109-119 (11 pages)
In the fifth generation (5G) wireless system, a closed-loop power control (CLPC) scheme based on a deep Q learning network (DQN) is introduced to intelligently adjust the transmit power of the base station (BS), which can bring the user equipment (UE) received signal to interference plus noise ratio (SINR) into a target threshold range. However, the power control (PC) action selected by the DQN does not accurately match the fluctuations of the wireless environment, because the experience replay characteristic of the conventional DQN scheme can lead to insufficient training of the target deep neural network (DNN); as a result, the Q-value of a sub-optimal PC action may exceed that of the optimal one. To solve this problem, we propose an improved DQN scheme in which an additional DNN is added to the conventional DQN and a shorter training interval is set to speed up and fully train the DNN, ensuring that the Q value of the optimal action remains maximum. After multiple episodes of training, the proposed scheme generates more accurate PC actions that match the fluctuations of the wireless environment, so the UE received SINR reaches the target threshold range faster and remains more stable. The simulation results prove that the proposed scheme outperforms the conventional schemes.
Keywords: Reinforcement learning; closed-loop power control (CLPC); signal-to-interference-plus-noise ratio (SINR)
Reliable Scheduling Method for Sensitive Power Business Based on Deep Reinforcement Learning
17
Authors: Shen Guo, Jiaying Lin, Shuaitao Bai, Jichuan Zhang, Peng Wang. Intelligent Automation & Soft Computing, SCIE, 2023, Issue 7, pp. 1053-1066 (14 pages)
The main function of the power communication business is to monitor, control, and manage the power communication network to ensure its normal and stable operation. Communication services related to dispatching data networks and the transmission of fault information or feeder automation have high requirements for delay; if processing time is prolonged, a power business cascade reaction may be triggered. In order to solve the above problems, this paper establishes an edge object-linked agent business deployment model for the power communication network to unify the management of data collection, resource allocation, and task scheduling within the system. It realizes the virtualization of object-linked agent computing resources through Docker container technology, designs the target model of network latency and energy consumption, and introduces the A3C algorithm from deep reinforcement learning, improving it according to scene characteristics and setting corresponding optimization strategies to minimize network delay and energy consumption. At the same time, to ensure that sensitive power business is handled in time, this paper designs a business dispatch model and a task migration model to solve the problem of server failure. Finally, a corresponding simulation program is designed to verify the feasibility and validity of this method and to compare it with other existing mechanisms.
Keywords: Power communication network; dispatching data networks; resource allocation; A3C algorithm; deep reinforcement learning
Optimal Placement and Sizing of Distributed Generations for Power Losses Minimization Using PSO-Based Deep Learning Techniques
18
Authors: Bello-Pierre Ngoussandou, Nicodem Nisso, Dieudonné Kaoga Kidmo, Kitmo. Smart Grid and Renewable Energy, 2023, Issue 9, pp. 169-181 (13 pages)
The integration of distributed generations (DGs) into distribution systems (DSs) is increasingly becoming a solution for compensating isolated local energy systems (ILESs). Additionally, distributed generations are used for self-consumption, with excess energy injected into centralized grids (CGs). However, the improper sizing of renewable energy systems (RESs) exposes the entire system to power losses. This work presents an optimization of a system consisting of distributed generations. Firstly, PSO algorithms evaluate the size of the entire system on the IEEE 14-bus test standard. Secondly, the size of the system is allocated using Improved Particle Swarm Optimization (IPSO). The convergence speed of the objective function enables a conjecture to be made about the robustness of the proposed system. The power and voltage profiles on the IEEE 14-bus standard show a decrease in power losses and an appropriate response to energy demands (EDs), validating the proposed method.
Keywords: Distributed generations; deep learning techniques; improved particle swarm optimization; power losses; power losses minimization; optimal placement
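The core PSO iteration used for sizing follows the standard velocity/position update. A compact sketch minimizing a stand-in objective (the quadratic loss below is a placeholder for a real power-flow loss evaluation on the IEEE 14-bus system):

import numpy as np

def pso_minimize(loss, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # candidate DG sizes per bus
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # position update
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Placeholder objective standing in for total active power loss as a function of DG sizes.
target = np.array([1.2, 0.0, 2.5, 0.7])
best, best_loss = pso_minimize(lambda s: np.sum((s - target) ** 2), dim=4)
print("best DG sizing:", best.round(2), "loss:", round(best_loss, 4))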
Application of Multivariate Reinforcement Learning Engine in Optimizing the Power Generation Process of Domestic Waste Incineration
19
Authors: Tao Ning, Dunli Chen. Journal of Electronic Research and Application, 2023, Issue 5, pp. 30-41 (12 pages)
Garbage incineration is an ideal method for the harmless and resource-oriented treatment of urban domestic waste. However, current domestic waste incineration power plants often face challenges related to maintaining consistent steam production and high operational costs. This article capitalizes on the technical advantages of big data and artificial intelligence, taking the optimization of the power generation process of domestic waste incineration as the entry point, and adopts four main engine modules: the Alibaba Cloud reinforcement learning algorithm engine, the operating parameter prediction engine, the anomaly recognition engine, and the video visual recognition algorithm engine. The reinforcement learning algorithm extracts the operational parameters of each incinerator to obtain a control benchmark. Through the operating parameter prediction algorithm, prediction models for drum pressure, primary steam flow, NOx, SO2, and HCl are constructed to achieve short-term prediction of operational parameters, ultimately improving control performance. The anomaly recognition algorithm develops a thickness identification model for the material layer in the drying section, allowing for rapid and effective assessment of feed material thickness to ensure uniformity control. Meanwhile, the visual recognition algorithm identifies flame images and assesses the combustion status and location of the combustion fire line within the furnace. This real-time understanding of furnace flame combustion conditions guides adjustments to the grate and air volume. Integrating AI technology into the waste incineration sector empowers the environmental protection industry to leverage big data. This development holds practical significance in optimizing the harmless and resource-oriented treatment of urban domestic waste, reducing operational costs, and increasing efficiency.
Keywords: Multivariable reinforcement learning engine; waste incineration power generation; visual recognition algorithm
Predicting the device performance of the perovskite solar cells from the experimental parameters through machine learning of existing experimental results (cited: 2)
20
Authors: Yao Lu, Dong Wei, Wu Liu, Juan Meng, Xiaomin Huo, Yu Zhang, Zhiqin Liang, Bo Qiao, Suling Zhao, Dandan Song, Zheng Xu. Journal of Energy Chemistry, SCIE EI CAS CSCD, 2023, Issue 2, pp. 200-208, I0006 (10 pages)
The performance of metal halide perovskite solar cells (PSCs) highly relies on the experimental parameters, including the fabrication processes and the compositions of the perovskites; tremendous experimental work has been done to optimize these factors. However, predicting the device performance of PSCs from the fabrication parameters before experiments is still challenging. Herein, we bridge this gap by machine learning (ML) based on a dataset including 1072 devices from peer-reviewed publications. The optimized ML model accurately predicts the PCE from the experimental parameters with a root mean square error of 1.28% and a Pearson coefficient r of 0.768. Moreover, the factors governing the device performance are ranked by Shapley additive explanations (SHAP), among which the A-site cation is crucial to obtaining highly efficient PSCs. Experiments and density functional theory calculations are employed to validate and help explain the prediction results of the ML model. Our work reveals the feasibility of ML in predicting device performance from experimental parameters before experiments, which enables reverse experimental design toward highly efficient PSCs.
Keywords: Machine learning; feature engineering; perovskite solar cells; power conversion efficiency
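How a parameters-to-PCE regressor and the two reported metrics (RMSE and Pearson r) could be set up is sketched below with scikit-learn; the synthetic feature matrix stands in for the encoded fabrication parameters of the 1072 curated devices, and gradient boosting stands in for the paper's optimized model:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Placeholder features standing in for encoded fabrication parameters and compositions.
X, y = make_regression(n_samples=1072, n_features=20, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
pearson_r = np.corrcoef(pred, y_te)[0, 1]
print(f"RMSE = {rmse:.2f}, Pearson r = {pearson_r:.3f}")
# A SHAP-style ranking of governing factors can be approximated via model.feature_importances_.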