This paper develops and analyzes a stochastic derivative-free optimization strategy. A key feature is the state-dependent adaptive variance. We prove global convergence in probability with an algebraic rate and present quantitative results from numerical examples. A striking fact is that convergence is achieved without explicit gradient information, and even without comparing different objective function values as is done in established methods such as the simplex method and simulated annealing. The method can instead be viewed as annealing with a state-dependent temperature.
Funding: partially supported by the National Science Foundation through grants DMS-2208504 (BE), DMS-1913309 (KR), DMS-1937254 (KR), and DMS-1913129 (YY), and by support from Dr. Max Rossler, the Walter Haefner Foundation, and the ETH Zurich Foundation.
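The mechanism the abstract describes — a random walk whose noise variance shrinks with the objective value, every step accepted unconditionally, no two function values ever compared — can be sketched as follows. This is a minimal illustration, not the paper's actual scheme: the variance schedule sigma(x)^2 proportional to f(x) (assuming min f = 0), the step-size constant, and all function names are assumptions made for the example.

```python
import numpy as np

def state_dependent_search(f, x0, n_iters=20000, step=0.1, seed=0):
    """Hypothetical derivative-free search with state-dependent noise.

    Each iteration moves x <- x + sigma(x) * xi with xi ~ N(0, I) and
    sigma(x)^2 = step^2 * f(x), assuming min f = 0. Every step is
    accepted, so the objective is never used to compare or reject moves;
    its only role is to scale the noise, which freezes as f(x) -> 0.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(n_iters):
        sigma = step * np.sqrt(f(x))                  # state-dependent std dev
        x = x + sigma * rng.standard_normal(x.shape)  # unconditional move
    return x

# Toy usage: f(x) = x^2 in one dimension. The iterate drifts toward the
# minimizer because the noise variance vanishes near the minimum.
f = lambda x: float(np.dot(x, x))
x_star = state_dependent_search(f, x0=[1.0])
```

This mirrors the annealing analogy in the abstract: sigma(x) plays the role of a temperature that is set by the current state rather than by an external cooling schedule.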