Journal Articles
5 articles found
1. Preface (Cited: 1)
Authors: Andrea Bertozzi, Ron Fedkiw, Frederic Gibou, Chiu-Yen Kao, Chi-Wang Shu, Richard Tsai, Wotao Yin, Hong-Kai Zhao. Communications on Applied Mathematics and Computation (EI), 2024, Issue 2, pp. 755-756.
During April 20-22, 2022, colleagues and friends gathered at the Institute for Pure and Applied Mathematics (IPAM) at the University of California, Los Angeles to celebrate Professor Stanley Osher's 80th birthday at a conference focusing on recent developments in "Optimization, Shape analysis, High-dimensional differential equations in science and Engineering, and machine learning Research" (OSHER). The conference hosted in-person talks by mathematicians, scientists, and industrial professionals from around the world. Those who could not attend extended their warm regards and expressed their appreciation for Professor Osher.
Keywords: Stanley, gathered, birthday
2. Learning to optimize: A tutorial for continuous and mixed-integer optimization (Cited: 1)
Authors: Xiaohan Chen, Jialin Liu, Wotao Yin. Science China Mathematics (SCIE, CSCD), 2024, Issue 6, pp. 1191-1262.
Learning to optimize (L2O) stands at the intersection of traditional optimization and machine learning, utilizing the capabilities of machine learning to enhance conventional optimization techniques. As real-world optimization problems frequently share common structures, L2O provides a tool to exploit these structures for better or faster solutions. This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself, making it more adaptive to real-world applications. By considering the prerequisites for successful applications of L2O and the structure of the optimization problems at hand, this tutorial provides a comprehensive guide for practitioners and researchers alike.
Keywords: AI for mathematics (AI4Math), learning to optimize, algorithm unrolling, plug-and-play methods, differentiable programming, machine learning for combinatorial optimization (ML4CO)
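To make the tutorial's algorithm-unrolling idea concrete, here is a minimal sketch of unrolled ISTA for a LASSO-type problem min_x 0.5*||Ax - b||^2 + lam*||x||_1. The problem instance, the ten unrolled iterations, and the hand-set step sizes are illustrative assumptions, not material from the tutorial; in L2O proper, the per-iteration parameters would be trained over a distribution of problem instances.

```python
# Minimal sketch of algorithm unrolling (one flavor of L2O) for a LASSO-type
# problem. All data and parameters below are illustrative, not from the paper.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (elementwise shrinkage).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(A, b, steps, lam):
    # Run a fixed, finite number of ISTA iterations. In L2O, the entries of
    # `steps` (and possibly lam) would be learned instead of set by hand.
    x = np.zeros(A.shape[1])
    for alpha in steps:
        grad = A.T @ (A @ x - b)                     # gradient of the smooth part
        x = soft_threshold(x - alpha * grad, alpha * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
b = A @ x_true
steps = [1.0 / np.linalg.norm(A, 2) ** 2] * 10       # hand-set here; learned in L2O
print(unrolled_ista(A, b, steps, lam=0.1)[:8])
```

Because the iteration count is fixed and every step is differentiable, the whole unrolled solver can be treated as a network and trained end-to-end.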
3. An alternating direction algorithm for matrix completion with nonnegative factors (Cited: 24)
Authors: Yangyang Xu, Wotao Yin, Zaiwen Wen, Yin Zhang. Frontiers of Mathematics in China (SCIE, CSCD), 2012, Issue 2, pp. 365-384.
This paper introduces an algorithm for the nonnegative matrix factorization-and-completion problem, which aims to find nonnegative low-rank matrices X and Y so that the product XY approximates a nonnegative data matrix M whose elements are partially known (to a certain accuracy). This problem aggregates two existing problems: (i) nonnegative matrix factorization, where all entries of M are given, and (ii) low-rank matrix completion, where nonnegativity is not required. By taking advantage of both nonnegativity and low-rankness, one can generally obtain superior results to those obtained using just one of the two properties. We propose to solve the non-convex constrained least-squares problem using an algorithm based on the classical alternating direction augmented Lagrangian method. Preliminary convergence properties of the algorithm and numerical simulation results are presented. Compared to a recent algorithm for nonnegative matrix factorization, the proposed algorithm produces factorizations of similar quality using only about half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the proposed algorithm yields overall better quality than that produced by two recent matrix-completion algorithms that do not exploit nonnegativity.
Keywords: nonnegative matrix factorization, matrix completion, alternating direction method, hyperspectral unmixing
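For a concrete feel of the problem, below is a simplified alternating-minimization sketch of nonnegative factorization-and-completion, min over X, Y >= 0 of ||P_Omega(XY - M)||_F^2. It uses plain projected gradient steps for brevity and does not reproduce the paper's alternating direction augmented Lagrangian solver; the data, rank, step size, and iteration count are illustrative.

```python
# Simplified sketch of nonnegative matrix factorization-and-completion via
# alternating projected gradient steps. This is NOT the paper's ADM algorithm.
import numpy as np

def nmf_complete(M, mask, rank, iters=1000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.random((m, rank))
    Y = rng.random((rank, n))
    for _ in range(iters):
        R = mask * (X @ Y - M)                       # residual on observed entries only
        X = np.maximum(X - lr * R @ Y.T, 0.0)        # gradient step + nonneg projection
        R = mask * (X @ Y - M)
        Y = np.maximum(Y - lr * X.T @ R, 0.0)
    return X, Y

rng = np.random.default_rng(1)
M = rng.random((40, 3)) @ rng.random((3, 30))        # nonnegative rank-3 ground truth
mask = (rng.random(M.shape) < 0.5).astype(float)     # observe roughly half the entries
X, Y = nmf_complete(M, mask, rank=3)
print(np.linalg.norm(X @ Y - M) / np.linalg.norm(M)) # relative recovery error
```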
4. Proximal-proximal-gradient method (Cited: 1)
Authors: Ernest K. Ryu, Wotao Yin. Journal of Computational Mathematics (SCIE, CSCD), 2019, Issue 6, pp. 778-812.
In this paper, we present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG generalizes the proximal-gradient method and ADMM, and it is applicable to minimization problems written as a sum of many differentiable and many non-differentiable convex functions. The non-differentiable functions can be coupled. We furthermore present a related stochastic variation, which we call stochastic PPG (S-PPG). S-PPG can be interpreted as a generalization of Finito and MISO to the sum of many coupled non-differentiable convex functions. We present many applications that can benefit from PPG and S-PPG and prove convergence for both methods. We demonstrate the empirical effectiveness of both methods through experiments on a CUDA GPU. A key strength of PPG and S-PPG, compared to existing methods, is their ability to directly handle a large sum of non-differentiable, non-separable functions with a constant stepsize independent of the number of functions. Such non-diminishing stepsizes allow them to be fast.
Keywords: proximal-gradient, ADMM, Finito, MISO, SAGA, operator splitting, first-order methods, distributed optimization, stochastic optimization, almost sure convergence, linear convergence
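A minimal sketch of a PPG-style iteration follows, assuming the update form x_half = prox_{a*r}(mean of z_i), then x_i = prox_{a*g_i}(2*x_half - z_i - a*grad f_i(x_half)), then z_i <- z_i + x_i - x_half. The toy objective (r the nonnegativity indicator, f_i quadratic, g_i an l1 term) and all parameters are illustrative choices, not taken from the paper.

```python
# Sketch of a PPG-style iteration on min_x r(x) + (1/n) * sum_i [f_i(x) + g_i(x)],
# with r = indicator of the nonnegative orthant, f_i(x) = 0.5*||x - a_i||^2,
# and g_i(x) = mu*||x||_1. Hedged: update form and objective are illustrative.
import numpy as np

def ppg(a_rows, mu, alpha, iters=200):
    n, d = a_rows.shape
    z = np.zeros((n, d))                              # one z_i per summand
    for _ in range(iters):
        x_half = np.maximum(z.mean(axis=0), 0.0)      # prox of alpha*r (projection)
        grads = x_half[None, :] - a_rows              # grad f_i at x_half
        u = 2 * x_half[None, :] - z - alpha * grads
        x = np.sign(u) * np.maximum(np.abs(u) - alpha * mu, 0.0)  # prox of alpha*g_i
        z = z + x - x_half[None, :]
    return x_half

rng = np.random.default_rng(2)
a_rows = rng.standard_normal((50, 10)) + 0.5
print(ppg(a_rows, mu=0.1, alpha=0.5))
```

Note that alpha is a constant that does not depend on the number of summands n, mirroring the constant-stepsize property highlighted in the abstract.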
5. On the Convergence of Asynchronous Parallel Iteration with Unbounded Delays
Authors: Zhimin Peng, Yangyang Xu, Ming Yan, Wotao Yin. Journal of the Operations Research Society of China (EI, CSCD), 2019, Issue 1, pp. 5-42.
Recent years have witnessed a surge of asynchronous parallel (async-parallel) iterative algorithms, driven by problems involving very large-scale data and large numbers of decision variables. Because of asynchrony, the iterates are computed with outdated information, and the age of the outdated information, which we call delay, is the number of times it has been updated since its creation. Almost all recent works prove convergence under the assumption of a finite maximum delay and set their stepsize parameters accordingly. However, the maximum delay is practically unknown. This paper presents a convergence analysis of an async-parallel method from a probabilistic viewpoint, allowing for large unbounded delays. An explicit stepsize formula that guarantees convergence is given in terms of the delays' statistics. With p+1 identical processors, we empirically measured that delays closely follow the Poisson distribution with parameter p, matching our theoretical model, and thus the stepsize can be set accordingly. Simulations on both convex and nonconvex optimization problems demonstrate the validity of our analysis and also show that the existing maximum-delay-induced stepsize is too conservative and often slows down the convergence of the algorithm.
Keywords: asynchronous, unbounded delays, nonconvex, convex
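The abstract's delay model can be illustrated with a small simulation: gradient steps on a strongly convex quadratic read iterates whose age is drawn from a Poisson(p) distribution, as the paper reports for p+1 identical processors. The stepsize shrinkage 1/(1+p) below is a heuristic placeholder, not the paper's explicit formula.

```python
# Toy simulation of gradient descent with Poisson-distributed delays on the
# quadratic f(x) = 0.5*x'Ax - b'x. The delay-aware stepsize is a heuristic.
import numpy as np

def delayed_gd(A, b, p, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.norm(A, 2)                  # Lipschitz constant of the gradient
    eta = 1.0 / (L * (1.0 + p))               # heuristic delay-aware stepsize
    history = [np.zeros(A.shape[1])]
    for k in range(iters):
        delay = min(rng.poisson(p), k)        # read an iterate `delay` steps old
        x_stale = history[-(delay + 1)]
        x_new = history[-1] - eta * (A @ x_stale - b)   # update with stale gradient
        history.append(x_new)
    return history[-1]

rng = np.random.default_rng(3)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q + np.eye(20)                      # symmetric positive definite
b = rng.standard_normal(20)
x = delayed_gd(A, b, p=4)
print(np.linalg.norm(A @ x - b))              # residual; small if converged
```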