Funding: Partially supported by the National Natural Science Foundation of China (70571079, 60534080) and the China Postdoctoral Science Foundation (20100471140).
Abstract: We consider variations of the classical jeep problem: the optimal logistics for a caravan of jeeps traveling together in the desert. The main purpose is to arrange the travel for the one-way trip and the round trip of a caravan of jeeps so that the chief jeep reaches the farthest destination. Based on the dynamic programming principle, the maximum distances for the caravan are obtained both when only part of the jeeps must return and when all drivers must return. Some related results, such as the efficiency of the abandoned jeeps and the advantage of having more jeeps in the caravan, are also presented.
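For orientation, the classical single-jeep special case has a closed-form answer: with n units of fuel and a tank holding one unit, the farthest one-way distance is 1 + 1/3 + ⋯ + 1/(2n−1), and the farthest round-trip distance is 1/2 + 1/4 + ⋯ + 1/(2n). A minimal sketch of these classical formulas (the caravan variants studied in the paper are not reproduced here):

```python
from fractions import Fraction

def one_way_distance(n: int) -> Fraction:
    """Classical jeep problem: farthest point reachable (no return)
    with n units of fuel and a tank holding one unit."""
    return sum(Fraction(1, 2 * k - 1) for k in range(1, n + 1))

def round_trip_distance(n: int) -> Fraction:
    """Farthest point from which the jeep can still return to base."""
    return sum(Fraction(1, 2 * k) for k in range(1, n + 1))

# With one unit of fuel: cross 1 unit of distance, or go 1/2 and return.
print(one_way_distance(1), round_trip_distance(1))   # 1 1/2
print(float(one_way_distance(3)))                    # 23/15 ≈ 1.5333
```

Exact rational arithmetic (`Fraction`) avoids floating-point drift in the harmonic-type sums.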
Funding: Supported by the National Natural Science Foundation of China (Nos. 11202181 and 11402258) and the Special Fund for the Doctoral Program of Higher Education of China (No. 20120101120171).
Abstract: The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed to energy-dependent damping and stiffness by the energy dissipation balance technique, and the controlled system is thereby transformed to an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, appropriate for evaluating system responses. Dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle. The optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and the mean time of first-passage failure of the optimally controlled Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.
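As a rough illustration of what a bounded bang-bang control of this kind does (this is not the paper's Duhem model: the Duffing oscillator, its parameters, and the control law u = −u_max·sgn(ẋ) below are generic placeholders), a direct Euler-Maruyama simulation shows such a control draining energy from a white-noise-excited oscillator:

```python
import math
import random

def mean_energy(u_max: float, seed: int = 42, T: float = 50.0, dt: float = 1e-3) -> float:
    """Euler-Maruyama simulation of a white-noise-driven Duffing oscillator
    x'' + c x' + x + a x^3 = u + noise, with the bounded bang-bang control
    u = -u_max * sign(x'), which dissipates energy at the maximal admissible rate.
    Returns the time-averaged total energy along the trajectory."""
    rng = random.Random(seed)                 # same seed => same noise path
    c, a, sigma = 0.05, 0.5, 0.2              # illustrative parameters, not the paper's
    x, v = 1.0, 0.0                           # start away from rest
    n = int(T / dt)
    total = 0.0
    for _ in range(n):
        u = -u_max * math.copysign(1.0, v) if v != 0.0 else 0.0
        dw = rng.gauss(0.0, math.sqrt(dt))
        x, v = x + v * dt, v + (-c * v - x - a * x**3 + u) * dt + sigma * dw
        total += 0.5 * v * v + 0.5 * x * x + 0.25 * a * x**4
    return total / n

# With the same noise path, the bounded control lowers the mean energy.
print(mean_energy(0.0), mean_energy(0.3))
```

The control −u_max·sgn(ẋ) always opposes the velocity, so its power input −u_max|ẋ| is nonpositive; comparing the two runs on the same noise path makes the energy reduction visible directly.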
Funding: Supported by the NSF of China (11071144, 11171187, 11222110, 71671104), Shandong Province (BS2011SF010, JQ201202), the SRF for ROCS (SEM), the Program for New Century Excellent Talents in University (NCET-12-0331), the 111 Project (B12023), the Ministry of Education Humanities and Social Science Project (16YJA910003), and the Incubation Group Project of Financial Statistics and Risk Management of SDUFE.
Abstract: We establish a new type of backward stochastic differential equations (BSDEs) connected with stochastic differential games (SDGs), namely, BSDEs strongly coupled with the lower and the upper value functions of SDGs, where the lower and the upper value functions are defined through this BSDE. The existence and uniqueness theorem and a comparison theorem are proved for such equations with the help of an iteration method. We also show that the lower and the upper value functions satisfy the dynamic programming principle. Moreover, we study the associated Hamilton-Jacobi-Bellman-Isaacs (HJB-Isaacs) equations, which are nonlocal and strongly coupled with the lower and the upper value functions. Using a new method, we characterize the pair (W, U) consisting of the lower and the upper value functions as the unique viscosity solution of our nonlocal HJB-Isaacs equation. Furthermore, the game has a value under Isaacs' condition.
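For readers new to the notation, a BSDE on [0, T] with terminal value ξ and driver f is the pair (Y, Z) solving the schematic equation below; in this paper the strong coupling arises because the driver itself depends on the lower and upper value functions:

```latex
Y_t \;=\; \xi \;+\; \int_t^T f\bigl(s, Y_s, Z_s\bigr)\,ds \;-\; \int_t^T Z_s\,dB_s,
\qquad 0 \le t \le T .
```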
Funding: Supported by the National Natural Science Foundation of China (11072212, 10932009) and the Zhejiang Natural Science Foundation of China (7080070).
Abstract: A stochastic optimal control strategy for a slightly sagged cable using support motion in the cable's axial direction is proposed. The nonlinear equation of in-plane cable motion is derived and reduced to the equations for the first two modes of cable vibration by the Galerkin method. The partially averaged Itô equation for the controlled system energy is further derived by applying the stochastic averaging method for quasi-non-integrable Hamiltonian systems. The dynamical programming equation for the controlled system energy with a performance index is established by applying the stochastic dynamical programming principle, and a stochastic optimal control law is obtained by solving the dynamical programming equation. A bilinear controller designed by the direct Lyapunov method is introduced for comparison. The comparison between the two controllers shows that the proposed stochastic optimal control strategy is superior to the bilinear control strategy in terms of control effectiveness and efficiency.
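Schematically (with generic averaged drift m, diffusion σ, value function V, and running cost f; these symbols are placeholders, not the paper's exact expressions), the averaging and dynamic programming steps take the form:

```latex
dH(t) = m(H, u)\,dt + \sigma(H)\,dB(t), \qquad
\frac{\partial V}{\partial t}
 + \min_{u}\left\{ m(H, u)\,\frac{\partial V}{\partial H}
 + \frac{1}{2}\,\sigma^{2}(H)\,\frac{\partial^{2} V}{\partial H^{2}}
 + f(H, u) \right\} = 0 .
```

The optimal control law is read off from the minimizing u in the braces at each energy level H.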
Abstract: This paper investigates an optimal investment strategy for a consumption and portfolio problem in which the investor must withdraw funds continuously at a given rate. By analyzing the evolution of wealth, we give the definition of a safe region for investment. Moreover, in order to reach the target wealth as quickly as possible, we use the Bellman dynamic programming principle to obtain the optimal investment strategy and the corresponding expected time needed. Finally, we give numerical computations for a set of different parameters.
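One plausible formalization (the constant market coefficients r, μ, σ and withdrawal rate c below are assumptions for illustration, not necessarily the paper's exact model): with amount π_t invested in the risky asset, wealth evolves as below, and since a wealth level of at least c/r can sustain the withdrawals forever on the riskless asset alone (π ≡ 0), the set {X ≥ c/r} is the natural candidate for a safe region:

```latex
dX_t = \bigl[\, r X_t + \pi_t(\mu - r) - c \,\bigr]\,dt + \sigma \pi_t\,dB_t,
\qquad \text{candidate safe region: } X_t \ge \frac{c}{r}.
```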
Funding: Supported by the National Natural Science Foundation of China (No. 61573217), the 111 Project (No. B12023), the National High-level Personnel of Special Support Program, and the Chang Jiang Scholar Program of the Ministry of Education of China.
Abstract: The authors prove a sufficient stochastic maximum principle for the optimal control of a forward-backward Markov regime-switching jump-diffusion system and show its connection to the dynamic programming principle. The result is applied to a cash flow valuation problem with a terminal wealth constraint in a financial market. An explicit optimal strategy is obtained in this example.
Funding: Supported in part by National Natural Science Foundation of China Grant No. 10131040. The author also thanks the referee for constructive suggestions.
Abstract: We will study the following problem. Let X_t, t ∈ [0, T], be an R^d-valued process defined on a time interval [0, T], and let Y be a random value depending on the trajectory of X. Assume that, at each fixed time t ≤ T, the information available to an agent (an individual, a firm, or even a market) is the trajectory of X before t. Thus at time T, the random value Y(ω) will become known to this agent. The question is: how will this agent evaluate Y at time t? We introduce an evaluation operator ε_t[Y] to define the value of Y given by this agent at time t. This operator ε_t[·] assigns to an (X_s)_{0≤s≤T}-dependent random variable Y an (X_s)_{0≤s≤t}-dependent random variable ε_t[Y]. We mainly treat the situation in which the process X is the solution of an SDE (see equation (3.1)) whose drift coefficient b and diffusion coefficient σ contain an unknown parameter θ = θ_t. We then consider the so-called super evaluation, arising when the agent is a seller of the asset Y. We prove that such a super evaluation is a filtration-consistent nonlinear expectation. In some typical situations, we prove that a filtration-consistent nonlinear evaluation dominated by this super evaluation is a g-evaluation. We also consider the corresponding nonlinear Markovian situation.
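In the g-evaluation case the operator is induced by a BSDE: given a driver g, the evaluation of Y at time t is the first component of the BSDE solution (schematic form):

```latex
y_t = Y + \int_t^T g\bigl(s, y_s, z_s\bigr)\,ds - \int_t^T z_s\,dB_s,
\qquad \varepsilon_t[Y] := y_t .
```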
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11171187 and 11222110, Shandong Province under Grant No. JQ201202, the Program for New Century Excellent Talents in University under Grant No. NCET-12-0331, and the 111 Project under Grant No. B12023.
Abstract: This paper discusses mean-field backward stochastic differential equations (mean-field BSDEs) with jumps and a new type of controlled mean-field BSDEs with jumps, namely, mean-field BSDEs with jumps strongly coupled with the value function of the associated control problem. The authors first prove the existence and the uniqueness as well as a comparison theorem for the above two types of BSDEs, using an approximation method. Then, with the help of the notion of stochastic backward semigroups introduced by Peng in 1997, the authors obtain the dynamic programming principle (DPP) for the value functions. Furthermore, the authors prove that the value function is a viscosity solution of the associated nonlocal Hamilton-Jacobi-Bellman (HJB) integro-partial differential equation, which is unique in an adequate space of continuous functions introduced by Barles, et al. in 1997.
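One common schematic shape for a mean-field BSDE with jumps is shown below, with the mean-field interaction entering through E[Y_s] and the jump term driven by a compensated Poisson random measure Ñ; the paper's driver may couple in the value function and the law differently:

```latex
Y_t = \xi + \int_t^T f\bigl(s, Y_s, Z_s, K_s, \mathbb{E}[Y_s]\bigr)\,ds
      - \int_t^T Z_s\,dB_s
      - \int_t^T\!\!\int_E K_s(e)\,\widetilde{N}(ds, de) .
```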
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 11701040, 11871010, and 61871058, and the Fundamental Research Funds for the Central Universities under Grant No. 2019XDA11.
Abstract: This paper focuses on zero-sum stochastic differential games in the framework of forward-backward stochastic differential equations on a finite time horizon, with both players adopting impulse controls. By means of BSDE methods, in particular the notion of Peng's stochastic backward semigroups, the authors prove a dynamic programming principle for both the upper and the lower value functions of the game. The upper and the lower value functions are then shown to be the unique viscosity solutions of the Hamilton-Jacobi-Bellman-Isaacs equations with a double obstacle. As a consequence, the uniqueness implies that the upper and lower value functions coincide and the game admits a value.
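The Isaacs condition invoked in results of this kind is the minimax interchange for the Hamiltonian (written schematically with a generic Hamiltonian H over control sets U and V); when it holds, the upper and lower values coincide:

```latex
\sup_{u \in U}\,\inf_{v \in V} H(t, x, p, A, u, v)
\;=\;
\inf_{v \in V}\,\sup_{u \in U} H(t, x, p, A, u, v) .
```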
Funding: Supported by the Agence Nationale de la Recherche (France), reference ANR-10-BLAN-0112; the Marie Curie ITN "Controlled Systems", call FP7-PEOPLE-2007-1-1-ITN, no. 213841-2; the National Natural Science Foundation of China (Nos. 10701050, 11071144); the National Basic Research Program of China (973 Program, No. 2007CB814904); Shandong Province (No. Q2007A04); the Independent Innovation Foundation of Shandong University; and the SRF for ROCS, SEM.
Abstract: In this paper we first investigate zero-sum two-player stochastic differential games with reflection, with the help of the theory of reflected backward stochastic differential equations (RBSDEs). We establish the dynamic programming principle for the upper and the lower value functions of this kind of stochastic differential game with reflection in a straightforward way. Then the upper and the lower value functions are proved to be the unique viscosity solutions of the associated upper and lower Hamilton-Jacobi-Bellman-Isaacs equations with obstacles, respectively. The method differs significantly from those used for control problems with reflection, and new techniques of independent interest are developed. Further, we also prove a new estimate for RBSDEs that is sharper than that in the paper of El Karoui, Kapoudjian, Pardoux, Peng and Quenez (1997); it turns out to be very useful because it allows us to estimate the L^p-distance of the solutions of two different RBSDEs by the p-th power of the distance of the initial values of the driving forward equations. We also show that the unique viscosity solution of the approximating Isaacs equation constructed by the penalization method converges to the viscosity solution of the Isaacs equation with obstacle.
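Written schematically (with Y^x the solution of the RBSDE driven by the forward equation started at x, and C_p a constant depending on p and the data), the sharper estimate described above reads:

```latex
\mathbb{E}\Bigl[\sup_{0 \le t \le T}\bigl|Y^{x}_t - Y^{x'}_t\bigr|^{p}\Bigr]
\;\le\; C_p\,\bigl|x - x'\bigr|^{p} .
```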
Funding: Supported by the National Natural Science Foundation of China (Nos. 11272201, 11132007, and 10802030).
Abstract: To enhance the reliability of stochastically excited structures, it is significant to study the problem of stochastic optimal control for minimizing first-passage failure. Combining the stochastic averaging method with the dynamical programming principle, we study the optimal control for minimizing the first-passage failure of multi-degree-of-freedom (MDoF) nonlinear oscillators under Gaussian white noise excitations. The equations of motion of the controlled system are reduced to time-homogeneous diffusion processes by stochastic averaging. The optimal control law is determined by the dynamical programming equations and the control constraint. The backward Kolmogorov (BK) equation and the Pontryagin equation are established to obtain the conditional reliability function and the mean first-passage time (MFPT) of the optimally controlled system, respectively. An example shows that the proposed control strategy can increase the reliability and MFPT of the original system, while also facilitating the mathematical treatment.
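In the averaged one-dimensional energy setting these two equations take the standard forms below (schematic: m and σ² denote the averaged drift and diffusion of the controlled energy, H_c the critical threshold, R the conditional reliability, and τ the MFPT; the paper's concrete coefficients are not reproduced here):

```latex
\frac{\partial R}{\partial t}
 = m(H_0)\,\frac{\partial R}{\partial H_0}
 + \frac{1}{2}\,\sigma^{2}(H_0)\,\frac{\partial^{2} R}{\partial H_0^{2}},
\qquad R(0, H_0) = 1 \ (H_0 < H_c), \quad R(t, H_c) = 0;
\\[6pt]
m(H_0)\,\frac{d\tau}{dH_0}
 + \frac{1}{2}\,\sigma^{2}(H_0)\,\frac{d^{2}\tau}{dH_0^{2}} = -1,
\qquad \tau(H_c) = 0 .
```

The first (BK) equation propagates the survival probability from the initial energy H_0; the second (Pontryagin) equation yields the mean time until the energy first reaches H_c.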