Journal Articles
4 articles found
1. Evolutionary-assisted reinforcement learning for reservoir real-time production optimization under uncertainty (cited: 1)
Authors: Zhong-Zheng Wang, Kai Zhang, Guo-Dong Chen, Jin-Ding Zhang, Wen-Dong Wang, Hao-Chen Wang, Li-Ming Zhang, Xia Yan, Jun Yao
Journal: Petroleum Science (SCIE, EI, CAS, CSCD), 2023, No. 1, pp. 261-276 (16 pages)
Abstract: Production optimization has gained increasing attention from the smart oilfield community because it can increase economic benefits and oil recovery substantially. While existing methods could produce high-optimality results, they cannot be applied to real-time optimization for large-scale reservoirs due to high computational demands. In addition, most methods generally assume that the reservoir model is deterministic and ignore the uncertainty of the subsurface environment, making the obtained scheme unreliable for practical deployment. In this work, an efficient and robust method, namely evolutionary-assisted reinforcement learning (EARL), is proposed to achieve real-time production optimization under uncertainty. Specifically, the production optimization problem is modeled as a Markov decision process in which a reinforcement learning agent interacts with the reservoir simulator to train a control policy that maximizes the specified goals. To deal with the problems of brittle convergence properties and lack of efficient exploration strategies of reinforcement learning approaches, a population-based evolutionary algorithm is introduced to assist the training of agents, which provides diverse exploration experiences and promotes stability and robustness due to its inherent redundancy. Compared with prior methods that only optimize a solution for a particular scenario, the proposed approach trains a policy that can adapt to uncertain environments and make real-time decisions to cope with unknown changes. The trained policy, represented by a deep convolutional neural network, can adaptively adjust the well controls based on different reservoir states. Simulation results on two reservoir models show that the proposed approach not only outperforms the RL and EA methods in terms of optimization efficiency but also has strong robustness and real-time decision capacity.
Keywords: Production optimization; Deep reinforcement learning; Evolutionary algorithm; Real-time optimization; Optimization under uncertainty
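The abstract above frames production optimization as a Markov decision process and assists the reinforcement learning agent with a population-based evolutionary search. Below is a minimal, self-contained sketch of that general idea; the ToyReservoirEnv, the tiny linear policy, the agent-update rule, and all constants are illustrative assumptions, not the paper's EARL implementation, which trains a deep convolutional policy against a full reservoir simulator.

```python
import numpy as np

# Toy stand-in for a reservoir simulator: state = per-well pressures (hypothetical),
# action = well controls in [0, 1], reward = a crude NPV proxy. This is NOT the
# paper's simulator; it only illustrates the MDP framing of production optimization.
class ToyReservoirEnv:
    def __init__(self, n_wells=4, horizon=10, seed=0):
        self.n_wells, self.horizon = n_wells, horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        # Random initial pressures stand in for geological uncertainty across episodes.
        self.state = self.rng.uniform(0.5, 1.0, self.n_wells)
        return self.state.copy()

    def step(self, action):
        action = np.clip(action, 0.0, 1.0)
        oil = np.sum(self.state * action)      # production revenue proxy
        cost = 0.3 * np.sum(action)            # operating/injection cost proxy
        self.state *= (1.0 - 0.1 * action)     # pressure depletion
        self.t += 1
        return self.state.copy(), oil - cost, self.t >= self.horizon

def rollout(env, theta, episodes=3):
    """Average return of a linear policy: action = sigmoid(W @ state + b)."""
    n = env.n_wells
    W, b = theta[: n * n].reshape(n, n), theta[n * n:]
    total = 0.0
    for _ in range(episodes):                  # average over random scenarios
        s, done = env.reset(), False
        while not done:
            a = 1.0 / (1.0 + np.exp(-(W @ s + b)))
            s, r, done = env.step(a)
            total += r
    return total / episodes

# Population-based evolutionary policy search with an "agent" policy that is
# nudged toward the best member each generation and reinjected into the
# population (a simplified stand-in for the RL agent in the hybrid scheme).
env = ToyReservoirEnv()
dim = env.n_wells * env.n_wells + env.n_wells
rng = np.random.default_rng(1)
population = [rng.normal(0, 0.5, dim) for _ in range(20)]
agent = population[0].copy()

for gen in range(30):
    scored = sorted(population, key=lambda th: rollout(env, th), reverse=True)
    elites = scored[:5]
    agent = 0.9 * agent + 0.1 * elites[0]      # agent learns from elite experience
    population = elites + [
        e + rng.normal(0, 0.1, dim) for e in elites for _ in range(3)
    ] + [agent.copy()]                          # agent re-enters the population

print("trained policy return:", rollout(env, agent, episodes=10))
```

Averaging returns over several randomly initialized episodes stands in for optimizing over an uncertain subsurface model, which is why the evolved policy is selected for robustness across scenarios rather than tuned to a single one.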
2. Inexact dynamic optimization for groundwater remediation planning and risk assessment under uncertainty
Journal: Global Geology, 1998, No. 1, pp. 22-23 (2 pages)
Keywords: Inexact dynamic optimization for groundwater remediation planning and risk assessment under uncertainty
3. Scheduling Multi-Mode Projects under Uncertainty to Optimize Cash Flows: A Monte Carlo Ant Colony System Approach (cited: 3)
Authors: 陈伟能, 张军
Journal: Journal of Computer Science & Technology (SCIE, EI, CSCD), 2012, No. 5, pp. 950-965 (16 pages)
Abstract: Project scheduling under uncertainty is a challenging field of research that has attracted increasing attention. While most existing studies only consider the single-mode project scheduling problem under uncertainty, this paper aims to deal with a more realistic model called the stochastic multi-mode resource constrained project scheduling problem with discounted cash flows (S-MRCPSPDCF). In the model, activity durations and costs are given by random variables. The objective is to find an optimal baseline schedule so that the expected net present value (NPV) of cash flows is maximized. To solve the problem, an ant colony system (ACS) based approach is designed. The algorithm dispatches a group of ants to build baseline schedules iteratively using pheromones and an expected discounted cost (EDC) heuristic. Since it is impossible to evaluate the expected NPV directly due to the presence of random variables, the algorithm adopts the Monte Carlo (MC) simulation technique. As the ACS algorithm only uses the best-so-far solution to update pheromone values, it is found that a rough simulation with a small number of random scenarios is enough for evaluation. Thus the computational cost is reduced. Experimental results on 33 instances demonstrate the effectiveness of the proposed model and the ACS approach.
Keywords: Project scheduling; Optimization under uncertainty; Cash flow; Ant colony optimization; Monte Carlo simulation
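A central point of the abstract is that the expected NPV of a candidate baseline schedule cannot be evaluated in closed form because durations and costs are random, so the ACS scores schedules with a small number of Monte Carlo scenarios. The sketch below illustrates only that evaluation step under strong simplifying assumptions (a fixed serial activity order, normally distributed durations, fixed cash flows at completion); the activity data and discount rate are invented for illustration and are not from the paper.

```python
import numpy as np

# Minimal Monte Carlo estimator of the expected NPV of one baseline activity
# sequence. The paper's S-MRCPSPDCF additionally handles execution modes,
# resource constraints, and precedence relations, which this toy omits.
rng = np.random.default_rng(42)

activities = {
    # name: (mean_duration, duration_std, cash_flow_at_completion)
    "A": (3.0, 0.5, -100.0),   # negative flow = cost paid at completion
    "B": (5.0, 1.0,  -80.0),
    "C": (4.0, 0.8,  300.0),   # final payment
}
order = ["A", "B", "C"]        # baseline schedule: activities executed serially
discount_rate = 0.01           # per time unit

def sampled_npv(rng):
    """One scenario: sample durations, accumulate discounted cash flows."""
    t, npv = 0.0, 0.0
    for name in order:
        mean, std, flow = activities[name]
        t += max(0.1, rng.normal(mean, std))        # random activity duration
        npv += flow / (1.0 + discount_rate) ** t    # discount flow at finish time
    return npv

def expected_npv(n_scenarios=200):
    # A rough simulation with a small scenario count is enough to rank
    # candidate schedules, which keeps evaluation cheap.
    return float(np.mean([sampled_npv(rng) for _ in range(n_scenarios)]))

print("estimated expected NPV:", round(expected_npv(), 2))
```

In an ACS setting, an estimator like expected_npv would be called on each ant-constructed schedule; since only the best-so-far solution updates the pheromones, a rough estimate from a small scenario count is enough to rank candidates cheaply, which is the abstract's efficiency argument.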
4. The “Iterated Weakest Link” Model of Adaptive Security Investment
Authors: Rainer Böhme, Tyler Moore
Journal: Journal of Information Security, 2016, No. 2, pp. 81-102 (22 pages)
Abstract: We devise a model for security investment that reflects dynamic interaction between a defender, who faces uncertainty, and an attacker, who repeatedly targets the weakest link. Using the model, we derive and compare optimal security investment over multiple periods, exploring the delicate balance between proactive and reactive security investment. We show how the best strategy depends on the defender’s knowledge about prospective attacks and the recoverability of costs when upgrading defenses reactively. Our model explains why security under-investment is sometimes rational even when effective defenses are available and can be deployed independently of other parties’ choices. Finally, we connect the model to real-world security problems by examining two case studies where empirical data are available: computers compromised for use in online crime and payment card security.
Keywords: Optimal Security Investment under Uncertainty; Return on Security Investment
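To make the proactive-versus-reactive trade-off in the abstract concrete, here is a small toy simulation, assuming a defender who either hardens every attack vector up front or patches each weakest link only after it has been exploited. All probabilities, costs, and loss values are invented for illustration; the paper's model additionally accounts for the recoverability of reactive upgrade costs and the defender's knowledge about prospective attacks, which this toy ignores.

```python
import random

# Toy multi-period comparison of proactive vs. reactive defense against an
# attacker who always exploits the first undefended viable (weakest) link.
random.seed(7)

N_THREATS = 5           # potential attack vectors
P_VIABLE = 0.5          # defender's uncertainty: chance a given threat is real
DEFENSE_COST = 10.0     # cost to close one attack vector
LOSS_PER_BREACH = 25.0  # loss when an open, viable vector is exploited
PERIODS = 8

def simulate(strategy, trials=2000):
    total = 0.0
    for _ in range(trials):
        viable = [random.random() < P_VIABLE for _ in range(N_THREATS)]
        defended = [strategy == "proactive"] * N_THREATS
        cost = DEFENSE_COST * N_THREATS if strategy == "proactive" else 0.0
        for _ in range(PERIODS):
            # Attacker targets the first undefended viable vector, if any.
            target = next((i for i in range(N_THREATS)
                           if viable[i] and not defended[i]), None)
            if target is None:
                break
            cost += LOSS_PER_BREACH     # the breach reveals the weak link...
            defended[target] = True     # ...and the defender patches reactively
            cost += DEFENSE_COST
        total += cost
    return total / trials

for s in ("proactive", "reactive"):
    print(s, "average total cost:", round(simulate(s), 1))
```

With these particular numbers the proactive strategy comes out cheaper, but raising DEFENSE_COST or lowering P_VIABLE flips the ranking, echoing the paper's conclusion that the best strategy depends on the defender's knowledge and cost structure.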