Abstract: We continue our study of classification learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and general convex loss functions. The main purpose of this paper is to improve error bounds by presenting a new comparison theorem associated with general convex loss functions and Tsybakov noise conditions. Concrete examples are provided to illustrate the improved learning rates, which demonstrate the effect of various loss functions on the learning algorithms. In our analysis, the convexity of the loss functions plays a central role.
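The regularization scheme described in the abstract can be illustrated with a minimal sketch. The paper treats general convex loss functions; the sketch below picks one convex choice, the least-squares loss, for which the Tikhonov-regularized minimizer in the Gaussian-kernel RKHS has a closed form via the representer theorem. All function names and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_tikhonov(X, y, sigma=1.0, lam=1e-3):
    # Tikhonov regularization with the least-squares loss:
    #   minimize (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2
    # over the RKHS of the Gaussian kernel. By the representer theorem,
    # f(x) = sum_i alpha_i K(x, x_i) with alpha = (K + lam*n*I)^{-1} y.
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_new, sigma=1.0):
    # Evaluate f on new points; classify by the sign of f.
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```

Other convex losses (hinge, logistic) lose the closed form but fit the same Tikhonov scheme, with the solution obtained by convex optimization instead of a linear solve; the comparison theorem in the paper is what relates the excess risk under such losses to the misclassification error.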
Funding: Supported in part by the German Research Foundation within FOR 1182.
Abstract: The transition to turbulence in flows where the laminar profile is linearly stable requires perturbations of finite amplitude. "Optimal" perturbations are distinguished as extrema of certain functionals, and different functionals give different optima. Here we discuss the phase-space structure of a simplified 2D model of the transition to turbulence and characterize optimal perturbations with respect to three criteria: the energy of the initial condition, the energy dissipation of the initial condition, and the amplitude of noise in a stochastic transition. We find that the states triggering the transition differ in the three cases but show the same scaling with Reynolds number.
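The idea of an energy-optimal perturbation can be sketched in the linear, non-normal setting that underlies such transition models. The code below does not use the paper's 2D model; it assumes a generic Baggett–Trefethen-type stable non-normal matrix (the matrix `A` and the parameter `R` are illustrative) and shows that the unit-energy initial condition maximizing energy at time t is the leading right singular vector of the propagator exp(tA), with gain equal to the square of the leading singular value.

```python
import numpy as np

def propagator(A, t):
    # exp(t*A) via eigendecomposition (A assumed diagonalizable
    # with real eigenvalues, as for the triangular example below).
    lam, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(t * lam)) @ np.linalg.inv(V)).real

def optimal_perturbation(A, t):
    # Among unit-energy initial conditions x0, the energy at time t,
    # ||exp(t*A) x0||^2, is maximized by the leading right singular
    # vector of exp(t*A); the optimal gain is sigma_max^2.
    P = propagator(A, t)
    _, s, Vt = np.linalg.svd(P)
    return Vt[0], s[0] ** 2

R = 100.0  # illustrative "Reynolds number" controlling the decay rates
A = np.array([[-1.0 / R, 1.0],
              [0.0,      -2.0 / R]])  # linearly stable but non-normal
x0, gain = optimal_perturbation(A, t=R)  # transient growth peaks on O(R) times
```

Even though both eigenvalues of `A` are stable, the non-normal coupling produces a large transient gain, which is why finite-amplitude perturbations can trigger transition in a linearly stable flow. The other two criteria in the abstract (energy dissipation of the initial condition, noise amplitude in a stochastic transition) define different functionals and hence, as the abstract notes, select different optimal states.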