Abstract
Stochastic dynamic programming (SDP) is widely used in the optimization of long-term reservoir operations. Generally, both the steady-state optimal policy and its associated performance indices (PIs) for a multipurpose reservoir are of prime importance. There are two typical ways to derive the PIs: simulation and probability formulas. One disadvantage of these approaches is that they require a pre-specified operation policy. Inspired by the convergence of the objective function in SDP, a new approach is proposed to determine the desired PIs; its advantage is that it can be applied concurrently with the solving of the SDP itself. Its efficiency is also tested in a case study.
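As a rough illustration of the setting described above (not the paper's proposed method), the following sketch runs SDP value iteration for a toy single-reservoir model until the objective function converges, then evaluates one illustrative PI (supply reliability) from the stationary distribution of storage states under the converged policy. All numbers (inflow probabilities, demand, discount factor, state discretization) are assumptions for illustration only.

```python
import numpy as np

P_INFLOW = np.array([0.3, 0.5, 0.2])  # P(inflow = 0, 1, 2) -- illustrative values


def sdp_value_iteration(n_states=5, demand=2, gamma=0.95, tol=1e-8, max_iter=1000):
    """Toy SDP: states are discretized storage levels, decisions are releases.

    Iterates the Bellman recursion until the value function (objective)
    converges, returning the converged values and the steady-state policy.
    """
    V = np.zeros(n_states)
    policy = np.zeros(n_states, dtype=int)
    for _ in range(max_iter):
        V_new = np.empty(n_states)
        for s in range(n_states):
            best = -np.inf
            for release in range(s + 1):          # cannot release more than stored
                # Expected future value over the stochastic inflow
                exp_val = 0.0
                for q, pq in enumerate(P_INFLOW):
                    s_next = min(s - release + q, n_states - 1)  # spill at capacity
                    exp_val += pq * V[s_next]
                reward = -abs(release - demand)   # penalize deficit and surplus
                val = reward + gamma * exp_val
                if val > best:
                    best = val
                    policy[s] = release
            V_new[s] = best
        if np.max(np.abs(V_new - V)) < tol:       # convergence of the objective
            V = V_new
            break
        V = V_new
    return V, policy


def reliability_pi(policy, n_states=5, demand=2):
    """Illustrative PI: probability that full demand is met, computed from the
    stationary distribution of storage states under the given policy."""
    P = np.zeros((n_states, n_states))
    for s in range(n_states):
        r = int(policy[s])
        for q, pq in enumerate(P_INFLOW):
            s_next = min(s - r + q, n_states - 1)
            P[s, s_next] += pq
    pi = np.full(n_states, 1.0 / n_states)
    for _ in range(500):                          # power iteration to steady state
        pi = pi @ P
    return float(pi[policy >= demand].sum())
```

In this toy version the PI is evaluated after convergence; the approach the abstract proposes is to obtain such indices concomitantly with the SDP iterations, avoiding a separate simulation or probability-formula pass over a pre-specified policy.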
Funding
Yunnan Natural Science Foundation under contract 98E004Z