Abstract: A stochastic optimal control strategy for a slightly sagged cable using support motion in the cable axial direction is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to the equations for the first two modes of cable vibration by using the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained by solving the dynamical programming equation. A bilinear controller based on the direct method of Lyapunov is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy in terms of control effectiveness and efficiency.
Abstract: Many studies have considered the solution of Unit Commitment problems for the management of energy networks. Earlier work in this field addressed the problem in deterministic cases and in cases dealing with demand uncertainties. In this paper, the authors develop a method to deal with uncertainties in the cost function. Such uncertainties often occur in energy networks (a waste incinerator with a priori unknown waste amounts, a cogeneration plant with an uncertain sold-electricity price...). The corresponding optimization problems are large-scale stochastic nonlinear mixed-integer problems. The developed solution method is a recourse-based programming one. The main idea is that the amounts of energy to produce can be slightly adapted in real time, whereas the on/off statuses of units have to be decided very early in the management procedure. Results show that the proposed approach remains compatible with existing Unit Commitment programming methods and offers a clear benefit at reasonable computing cost.
Abstract: This paper focuses on applying stochastic dynamic programming (SDP) to reservoir operation. Based on a two-stage decision procedure, we built an operation model for the reservoir to derive operating rules. In a case study of China's Three Gorges Reservoir, long-term operating rules are obtained. Using the derived rules, the reservoir is simulated with inflows from 1882 to 2005; the mean hydropower generation is 85.71 billion kWh. The results show that SDP works well for reservoir operation.
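The backward recursion at the heart of such an SDP can be sketched as follows. This is a minimal illustrative toy, not the paper's Three Gorges model: the storage grid, inflow scenarios, release options, and power function are all assumptions chosen for clarity.

```python
# Toy backward SDP for reservoir operation.
# State = storage level on a grid, uncertainty = inflow scenarios,
# decision = release. V[t][i] is the optimal expected value-to-go.

def solve_sdp(storages, scenarios, releases, n_stages, power):
    """Backward recursion: V[t][s] = max_r sum_q p(q) * (power(r) + V[t+1][s+q-r])."""
    lo, hi = storages[0], storages[-1]
    nearest = lambda s: min(range(len(storages)), key=lambda i: abs(storages[i] - s))
    V = [[0.0] * len(storages) for _ in range(n_stages + 1)]
    policy = [[None] * len(storages) for _ in range(n_stages)]
    for t in range(n_stages - 1, -1, -1):
        for i, s in enumerate(storages):
            best, best_r = float("-inf"), None
            for r in releases:
                val, feasible = 0.0, True
                for q, p in scenarios:            # (inflow value, probability)
                    s_next = s + q - r
                    if not (lo <= s_next <= hi):  # storage bounds violated
                        feasible = False
                        break
                    val += p * (power(r) + V[t + 1][nearest(s_next)])
                if feasible and val > best:
                    best, best_r = val, r
            V[t][i], policy[t][i] = best, best_r
    return V, policy
```

The derived `policy` table is exactly an "operating rule": a release prescribed for each stage and storage level, which can then be replayed against a historical inflow record in simulation.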
Abstract: The stochastic dual dynamic programming (SDDP) algorithm is seeing increasingly wide use. In this paper we analyze different methods of lattice construction for SDDP, exemplified on a realistic variant of the newsvendor problem that incorporates storage of production. We model several days of operation and compare the profits realized using the different lattice-construction methods and the corresponding computer time spent on lattice construction. Our case differs from the known one in that we consider not only a multidimensional but also a multistage setting with stage dependence. We construct scenario lattices for different Markov processes, which play a crucial role in stochastic modeling. The novelty of our work lies in comparing different methods of scenario-lattice construction. The results presented in this article show that the Voronoi method slightly outperforms the others, but the k-means method is much faster overall.
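One of the construction methods compared above, k-means clustering, can be sketched in one dimension: sample paths of the Markov process, cluster the values at each stage into lattice nodes, and estimate transition probabilities from how paths move between clusters. The process, dimensionality, and parameters here are illustrative assumptions, not the paper's setup.

```python
# Scenario-lattice construction by per-stage 1-D k-means clustering.

def kmeans_1d(xs, k, iters=30):
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]  # spread initial centers
    while len(centers) < k:
        centers.append(centers[-1])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda j: abs(x - centers[j]))].append(x)
        # empty clusters keep their previous center
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

def build_lattice(paths, k):
    """paths: list of equal-length sample paths. Returns per-stage node
    values and stage-to-stage transition probability matrices."""
    T = len(paths[0])
    nodes = [kmeans_1d([p[t] for p in paths], k) for t in range(T)]
    assign = lambda x, cs: min(range(len(cs)), key=lambda j: abs(x - cs[j]))
    trans = []
    for t in range(T - 1):
        counts = [[0] * k for _ in range(k)]
        for p in paths:
            counts[assign(p[t], nodes[t])][assign(p[t + 1], nodes[t + 1])] += 1
        probs = []
        for row in counts:
            s = sum(row)
            probs.append([c / s if s else 0.0 for c in row])
        trans.append(probs)
    return nodes, trans
```

The Voronoi method mentioned in the abstract differs mainly in how node values are chosen; the transition-estimation step by nearest-node assignment is the same idea.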
Abstract: This paper is concerned with the relationship between the maximum principle and dynamic programming in zero-sum stochastic differential games. Under the assumption that the value function is sufficiently smooth, relations among the adjoint processes, the generalized Hamiltonian function, and the value function are given. A portfolio optimization problem under model uncertainty in the financial market is discussed to show the application of our results.
Funding: This research was partially supported by the Natural Science Foundation of Shaanxi Province (2001SL09).
Abstract: A new deterministic formulation, called the conditional expectation formulation, is proposed for dynamic stochastic programming problems in order to overcome some disadvantages of existing deterministic formulations. We then examine the impact of the new formulation and two other deterministic formulations on problem size, the number of nonzero elements, and solution time by solving some typical dynamic stochastic programming problems with different interior-point algorithms. Numerical results show the advantage and applicability of the new deterministic formulation.
Funding: Projects (51007047, 51077087) supported by the National Natural Science Foundation of China; Project (2013CB228205) supported by the National Key Basic Research Program of China; Project (20100131120039) supported by the Specialized Research Fund for the Doctoral Program of Higher Education, Ministry of Education, China; Project (ZR2010EQ035) supported by the Natural Science Foundation of Shandong Province, China.
Abstract: A novel approach is proposed to allocate spinning reserve for dynamic economic dispatch. The approach sets up a two-stage stochastic programming model to allocate reserve, which is solved using a decomposition algorithm based on Benders' decomposition. The model and the algorithm were applied to a simple 3-node system and an actual 445-node system for verification. Test results show that the model can save US $84.5 in cost for the 3-node test system, and that the algorithm can solve the model for the 445-node system within 5 min. The results also illustrate that the proposed approach is efficient and suitable for large-system calculations.
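The two-stage structure behind such a model can be illustrated on a toy instance: the first stage reserves capacity before uncertainty is revealed, and the second stage deploys it per scenario. For transparency this sketch replaces Benders' decomposition with a grid search over a single reserve variable; all costs, scenarios, and the shortage penalty are illustrative assumptions.

```python
# Two-stage stochastic reserve allocation, toy version.

def expected_cost(r, scenarios, c_reserve, c_deploy, c_penalty):
    cost = c_reserve * r                  # first stage: pay to reserve capacity r
    for dev, p in scenarios:              # scenario: (demand deviation, probability)
        deployed = min(r, dev)            # second stage: deploy reserve up to r
        shortage = dev - deployed         # unmet deviation is penalized
        cost += p * (c_deploy * deployed + c_penalty * shortage)
    return cost

def allocate_reserve(scenarios, c_reserve=5.0, c_deploy=1.0, c_penalty=50.0,
                     grid=None):
    grid = grid or [0.1 * i for i in range(101)]
    return min(grid, key=lambda r: expected_cost(r, scenarios,
                                                 c_reserve, c_deploy, c_penalty))
```

The optimum balances the marginal reservation cost against the probability-weighted penalty avoided, which is exactly the trade-off the full two-stage model captures at network scale.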
Abstract: This paper extends Slutsky's classic work on consumer theory to a random-horizon stochastic dynamic framework in which the consumer has an intertemporal planning horizon with uncertainty in future incomes and life span. Utility maximization yields a set of ordinary wealth-dependent demand functions, and a dual problem is set up to derive the wealth-compensated demand functions. This is the first time that wealth-dependent ordinary demand functions and wealth-compensated demand functions have been obtained under these uncertainties. The corresponding Roy's identity relationships and a set of random-horizon stochastic dynamic Slutsky equations are then derived. The extension incorporates realistic characteristics in consumer theory and advances the conventional microeconomic study of consumption to a more realistic optimal control framework.
Funding: Supported in part by the Joint Funds of the National Natural Science Foundation of China (U2066214), in part by the Shanghai Sailing Program (22YF1414500), and in part by Project SKLD22KM19, funded by the State Key Laboratory of Power System Operation and Control.
Abstract: In this paper, we propose an analytical stochastic dynamic programming (SDP) algorithm to address the optimal management problem of price-maker community energy storage. As a price-maker, energy storage smooths price differences, thus decreasing energy arbitrage value. However, this price-smoothing effect can produce significant external welfare changes by reducing consumer costs and producer revenues, which is not negligible for a community with energy storage systems. As such, we formulate community storage management as an SDP that aims to maximize both energy arbitrage and community welfare. To incorporate market interaction into the SDP format, we propose a framework that derives partial but sufficient market information to approximate the impact of storage operations on market prices. We then present an analytical SDP algorithm that does not require state discretization. Apart from computational efficiency, another advantage of the analytical algorithm is that it guides energy storage to charge or discharge by directly comparing its current marginal value with the expected future marginal value. Case studies indicate that community-owned energy storage maximizing both arbitrage and welfare value gains more benefit than storage maximizing arbitrage alone. The proposed algorithm ensures optimality and greatly reduces the computational complexity of standard SDP. Index Terms: analytical stochastic dynamic programming, energy management, energy storage, price-maker, social welfare.
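To see what the analytical algorithm improves on, it helps to sketch the standard discretized-state SDP baseline for storage arbitrage, whose cost grows with the state grid. This toy uses a price-taker setting with illustrative prices, grid, and efficiency; the paper's price-maker market feedback and welfare term are not modeled here.

```python
# Standard discretized SDP baseline for storage arbitrage (toy).
# levels: state-of-charge grid; actions: energy moved (+discharge, -charge);
# price_scen[t]: list of (price, probability) scenarios at stage t.

def storage_sdp(levels, actions, price_scen, T, eff=0.9):
    """V[t][i]: expected profit-to-go at stage t with state of charge levels[i]."""
    V = [[0.0] * len(levels) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for i, e in enumerate(levels):
            best = float("-inf")
            for a in actions:
                e_next = e - a
                if not (levels[0] <= e_next <= levels[-1]):
                    continue                      # state-of-charge bounds
                j = min(range(len(levels)), key=lambda m: abs(levels[m] - e_next))
                val = 0.0
                for price, p in price_scen[t]:
                    # discharging sells at eff * price; charging buys at price
                    rev = eff * price * a if a > 0 else price * a
                    val += p * (rev + V[t + 1][j])
                best = max(best, val)
            V[t][i] = best
    return V
```

The marginal-value comparison the paper describes (charge when the current marginal value is below the expected future one) is what the analytical algorithm extracts without ever building this state grid.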
Funding: Project supported by the National Natural Science Foundation of China (No. 10332030), the Special Fund for Doctor Programs in Institutions of Higher Learning of China (No. 20020335092), and the Zhejiang Provincial Natural Science Foundation (No. 101046), China.
Abstract: A stochastic optimal control strategy for partially observable nonlinear quasi-Hamiltonian systems is proposed. The optimal control forces consist of two parts. The first part is determined by the conditions under which the stochastic optimal control problem of a partially observable nonlinear system is converted into that of a completely observable linear system. The second part is determined by solving the dynamical programming equation derived by applying the stochastic averaging method and the stochastic dynamical programming principle to the completely observable linear control system. The response of the optimally controlled quasi-Hamiltonian system is predicted by solving the averaged Fokker-Planck-Kolmogorov equation associated with the optimally controlled completely observable linear system and solving the Riccati equation for the estimation error of the system states. An example is given to illustrate the procedure and effectiveness of the proposed control strategy.
Funding: Project supported by the Zhejiang Provincial Natural Science Foundation (No. 101046) and a grant from the Hong Kong RGC (No. PolyU 5051/02E).
Abstract: A new stochastic optimal control strategy for randomly excited quasi-integrable Hamiltonian systems using magneto-rheological (MR) dampers is proposed. The dynamic behavior of an MR damper is characterized by the Bouc-Wen hysteretic model. The control force produced by the MR damper is separated into a passive part incorporated in the uncontrolled system and a semi-active part to be determined. The system combining the Bouc-Wen hysteretic force is converted into an equivalent non-hysteretic nonlinear stochastic control system. Then Itô stochastic differential equations are derived from the equivalent system by using the stochastic averaging method. A dynamical programming equation for the controlled diffusion processes is established based on the stochastic dynamical programming principle. The non-clipping nonlinear optimal control law is obtained for a certain performance index by minimizing the dynamical programming equation. Finally, an example is given to illustrate the application and effectiveness of the proposed control strategy.
Funding: Supported in part by the Australian Research Council Discovery Early Career Researcher Award (DE200101128) and the Australian Research Council (DP190101557).
Abstract: In this paper, an adaptive dynamic programming (ADP) strategy is investigated for discrete-time nonlinear systems with unknown nonlinear dynamics subject to input saturation. To save communication resources between the controller and the actuators, stochastic communication protocols (SCPs) are adopted to schedule the control signal, and therefore the closed-loop system is essentially a protocol-induced switching system. A neural network (NN)-based identifier with a robust term is exploited for approximating the unknown nonlinear system, and a set of switch-based updating rules with an additional tunable parameter of the NN weights are developed with the help of gradient descent. By virtue of a novel Lyapunov function, a sufficient condition is proposed to achieve the stability of both the system identification errors and the update dynamics of the NN weights. Then, an offline value-iteration ADP algorithm is proposed to solve the optimal control of protocol-induced switching systems with saturation constraints, and its convergence is established by mathematical induction. Furthermore, an actor-critic NN scheme is developed to approximate the control law and the proposed performance index function in the framework of ADP, and the stability of the closed-loop system is analyzed using Lyapunov theory. Finally, numerical simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
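The value-iteration step at the core of such an ADP scheme can be sketched on a small discrete MDP, with a lookup table standing in for the critic NN and input saturation appearing as a bounded action set. The system, cost, and numbers below are illustrative assumptions, not the paper's model.

```python
# Value iteration with a saturated (bounded) action set; a table plays
# the role of the critic, and the greedy policy the role of the actor.

def value_iteration(n_states, actions, step, cost, gamma=0.9, iters=200):
    V = [0.0] * n_states
    for _ in range(iters):
        # Bellman update: V(s) <- min_a [ cost(s,a) + gamma * V(step(s,a)) ]
        V = [min(cost(s, a) + gamma * V[step(s, a)] for a in actions)
             for s in range(n_states)]
    policy = [min(actions, key=lambda a: cost(s, a) + gamma * V[step(s, a)])
              for s in range(n_states)]
    return V, policy
```

In the paper the table is replaced by NN approximators trained online, which is what makes the stability analysis of the weight-update dynamics necessary.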
Funding: Project supported by the National Natural Science Foundation of China (10332030) and the Research Fund for the Doctoral Program of Higher Education of China (20060335125).
Abstract: In this paper two different control strategies designed to alleviate the response of quasi-partially-integrable Hamiltonian systems subjected to stochastic excitation are proposed. First, by using the stochastic averaging method for quasi-partially-integrable Hamiltonian systems, an n-DOF controlled quasi-partially-integrable Hamiltonian system with stochastic excitation is converted into a set of partially averaged Itô stochastic differential equations. Then, the dynamical programming equation associated with the partially averaged Itô equations is formulated by applying the stochastic dynamical programming principle. In the first control strategy, the optimal control law is derived from the dynamical programming equation and the control constraints without solving the dynamical programming equation. In the second control strategy, the optimal control law is obtained by solving the dynamical programming equation. Finally, the responses of both the controlled and uncontrolled systems are predicted by solving the Fokker-Planck-Kolmogorov equation associated with the fully averaged Itô equations. An example is worked out to illustrate the application and effectiveness of the two proposed control strategies.
Funding: Project supported by the National Natural Science Foundation of China (No. 19972059).
Abstract: A strategy is proposed based on the stochastic averaging method for quasi-non-integrable Hamiltonian systems and the stochastic dynamical programming principle. The proposed strategy can be used to design nonlinear stochastic optimal control to minimize the response of quasi-non-integrable Hamiltonian systems subject to Gaussian white noise excitation. By using the stochastic averaging method for quasi-non-integrable Hamiltonian systems, the equations of motion of a controlled quasi-non-integrable Hamiltonian system are reduced to a one-dimensional averaged Itô stochastic differential equation. By using the stochastic dynamical programming principle, the dynamical programming equation for minimizing the response of the system is formulated. The optimal control law is derived from the dynamical programming equation and the bounded control constraints. The response of the optimally controlled system is predicted by solving the FPK equation associated with the Itô stochastic differential equation. An example is worked out in detail to illustrate the application of the proposed control strategy.
Funding: Supported by the National Natural Science Foundation of China (No. 70471049).
Abstract: In this paper, the optimal viability decision problem of linear discrete-time stochastic systems with a probability criterion is investigated. Under the condition of sequence-reachable discrete-time dynamic systems, an existence theorem for the optimal viability strategy is given, and a procedure for solving the optimal strategy based on dynamic programming is provided. A numerical example shows the effectiveness of the proposed methods.
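For a finite-state analogue of this problem, the probability criterion can be computed by backward dynamic programming: the value of a state is the best achievable probability of keeping the trajectory inside the viable set for the remaining horizon. The sketch below uses a hypothetical controlled random walk as the system; the states, actions, and transition probabilities are illustrative, not taken from the paper.

```python
def viability_dp(states, actions, trans, viable, horizon):
    """Backward dynamic programming for the probability criterion:
    V_t(x) = max_a sum_y P(y | x, a) * V_{t+1}(y) on the viable set,
    V_t(x) = 0 outside it, with V_T(x) = 1 on the viable set.
    V_0(x) is the best probability of staying viable for the whole horizon."""
    V = {x: (1.0 if viable(x) else 0.0) for x in states}
    policy = []
    for _ in range(horizon):
        newV, pol = {}, {}
        for x in states:
            if not viable(x):
                newV[x] = 0.0
                continue
            best, best_a = 0.0, None
            for a in actions:
                val = sum(p * V[y] for y, p in trans(x, a).items())
                if val > best:
                    best, best_a = val, a
            newV[x], pol[x] = best, best_a
        V = newV
        policy.append(pol)
    policy.reverse()  # policy[t] maps state -> action at stage t
    return V, policy

# Hypothetical system: a walk on {0,...,4}; the action nudges the state
# up or down, succeeding with probability 0.8; viable set is {1, 2, 3}.
states, actions = range(5), (-1, 1)
def trans(x, a):
    step = lambda d: min(4, max(0, x + d))
    return {step(a): 0.8, step(-a): 0.2} if step(a) != step(-a) else {step(a): 1.0}
viable = lambda x: 1 <= x <= 3
V, policy = viability_dp(states, actions, trans, viable, horizon=3)
```

The resulting `V` assigns each initial state its optimal viability probability, and `policy` is the stage-dependent strategy achieving it.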
Funding: Supported by the National Natural Science Foundation of China (Nos. 10332030 and 10772159), the Research Fund for the Doctoral Program of Higher Education of China (No. 20060335125), and the Foundation of ECUST (East China University of Science and Technology) for Outstanding Young Teachers (No. YH0157105), China.
Abstract: A modified nonlinear stochastic optimal bounded control strategy for randomly excited hysteretic systems with actuator saturation is proposed. First, the controlled hysteretic system is converted into an equivalent nonlinear non-hysteretic stochastic system. Then, the partially averaged Itô stochastic differential equation and the dynamical programming equation are established, respectively, by using the stochastic averaging method for quasi-non-integrable Hamiltonian systems and the stochastic dynamical programming principle, from which the optimal control law, consisting of optimal unbounded control and bang-bang control, is derived. Finally, the response of the optimally controlled system is predicted by solving the Fokker-Planck-Kolmogorov (FPK) equation associated with the fully averaged Itô equation. Numerical results show that the proposed control strategy has high control effectiveness and efficiency.
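A control law of the kind described, combining an unbounded optimizer with bang-bang saturation, typically takes the form of the unconstrained minimizer of the dynamical programming equation clipped to the actuator bound. The helper below is a generic sketch of that structure, with the value-function gradient `dV_dH` and control gain `g` as hypothetical inputs rather than quantities from the paper.

```python
def saturated_optimal_control(dV_dH, g, u_max):
    """Saturated control law of the form derived from a dynamical
    programming equation: the unconstrained minimizer -g * dV/dH is
    clipped to the actuator bound [-u_max, u_max]; whenever the
    unconstrained value exceeds the bound, the law degenerates to
    bang-bang control at +/- u_max."""
    u = -g * dV_dH
    return max(-u_max, min(u_max, u))
```

Near the origin, where the value-function gradient is small, the law acts as the unbounded optimal control; far from it, the actuator saturates and the control becomes bang-bang, matching the two-part structure stated in the abstract.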
Abstract: To investigate the equilibrium relationships between the volatility of capital and income, taxation, and ance in a stochastic control model, the uniqueness of the solution to this model was proved by the method of dynamic programming, with distributive disturbance and elastic labor supply introduced. Furthermore, the effects of the two types of shocks on the labor-leisure choice, the economic growth rate, and welfare were analyzed numerically, and the optimal tax policy was then derived.
Funding: Supported by the National Natural Science Foundation of China (Nos. 11072212 and 10932009) and the Zhejiang Natural Science Foundation of China (No. 7080070).
Abstract: A stochastic optimal control strategy for a slightly sagged cable, using support motion in the cable's axial direction, is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to equations for the first two modes of cable vibration by using the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained by solving the dynamical programming equation. A bilinear controller based on the direct method of Lyapunov is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy in terms of control effectiveness and efficiency.
Abstract: Many studies have considered the solution of Unit Commitment problems for the management of energy networks. Earlier work in this field addressed the problem in deterministic cases and in cases dealing with demand uncertainties. In this paper, the authors develop a method to deal with uncertainties in the cost function. Such uncertainties often occur in energy networks (a waste incinerator with a priori unknown waste amounts, a cogeneration plant with an uncertain price for the electricity sold, etc.). The corresponding optimization problems are large-scale stochastic nonlinear mixed-integer problems. The developed solution method is a recourse-based programming one. The main idea is that the amounts of energy to produce can be slightly adapted in real time, whereas the on/off statuses of units have to be decided very early in the management procedure. Results show that the proposed approach remains compatible with existing Unit Commitment programming methods and offers a clear benefit at reasonable computing cost.
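The two-stage structure described above, on/off statuses fixed before uncertainty resolves, production re-optimized per cost scenario, can be sketched on a toy instance. The following is a minimal illustration only: the two-unit data, cost scenarios, penalty value, and the cheapest-first recourse dispatch are all hypothetical stand-ins for the paper's large-scale mixed-integer model.

```python
import itertools

def recourse_cost(commit, demand, scenarios, fixed_cost, cap):
    """Two-stage recourse evaluation: the on/off vector `commit` is fixed
    before uncertainty resolves; production is re-optimized in each cost
    scenario (here by cheapest-first dispatch). Returns expected cost."""
    expected = 0.0
    for prob, marginal in scenarios:          # marginal[i]: scenario cost per unit
        order = sorted(range(len(commit)), key=lambda i: marginal[i])
        left = demand
        cost = sum(fc for on, fc in zip(commit, fixed_cost) if on)
        for i in order:                       # dispatch committed units, cheapest first
            if commit[i] and left > 0:
                q = min(cap[i], left)
                cost += marginal[i] * q
                left -= q
        if left > 0:
            cost += 1e3 * left                # penalty for unserved demand
        expected += prob * cost
    return expected

# First-stage decision: enumerate commitments for a 2-unit toy case.
scenarios = [(0.5, [20.0, 30.0]), (0.5, [35.0, 25.0])]   # uncertain marginal costs
best = min(itertools.product([0, 1], repeat=2),
           key=lambda c: recourse_cost(c, 80, scenarios, [100, 120], [60, 60]))
```

Enumerating first-stage decisions is only feasible for toy cases; the point of the sketch is the separation between the early binary commitment and the scenario-wise continuous recourse.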