Journal Articles
6 articles found
1. An Efficient Smoothing and Thresholding Image Segmentation Framework with Weighted Anisotropic-Isotropic Total Variation
Authors: Kevin Bui, Yifei Lou, Fredrick Park, Jack Xin. Communications on Applied Mathematics and Computation (EI), 2024, Issue 2, pp. 1369-1405 (37 pages).
In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV). The segmentation framework generally consists of two stages: smoothing and thresholding, thus referred to as smoothing-and-thresholding (SaT). In the first stage, a smoothed image is obtained by an AITV-regularized Mumford-Shah (MS) model, which can be solved efficiently by the alternating direction method of multipliers (ADMM) with a closed-form solution of a proximal operator of the l1-αl2 regularizer. The convergence of the ADMM algorithm is analyzed. In the second stage, we threshold the smoothed image by K-means clustering to obtain the final segmentation result. Numerical experiments demonstrate that the proposed segmentation framework is versatile for both grayscale and color images, efficient in producing high-quality segmentation results within a few seconds, and robust to input images that are corrupted with noise, blur, or both. We compare the AITV method with its original convex TV and nonconvex TV^p (0 < p < 1) counterparts, showcasing the qualitative and quantitative advantages of our proposed method.
Keywords: Image segmentation; Non-convex optimization; Mumford-Shah (MS) model; Alternating direction method of multipliers (ADMM); Proximal operator
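The closed-form proximal operator of the l1-αl2 regularizer mentioned in the abstract has a known three-case form in the literature. Below is a minimal NumPy sketch of that operator; the function name is made up here, the cases are written from memory of the published formula, and the exact boundaries should be checked against the paper before use.

```python
import numpy as np

def prox_l1_minus_alpha_l2(y, lam, alpha):
    """Sketch of prox_{lam*(||x||_1 - alpha*||x||_2)}(y).

    Follows the three-case closed form reported in the literature
    (written from memory; verify against the paper before relying on it).
    """
    y = np.asarray(y, dtype=float)
    max_abs = np.max(np.abs(y))
    x = np.zeros_like(y)
    if max_abs <= (1.0 - alpha) * lam:
        # Case 3: everything is shrunk to zero.
        return x
    if max_abs > lam:
        # Case 1: soft-threshold, then rescale to account for the -alpha*l2 term.
        z = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
        return z * (np.linalg.norm(z) + alpha * lam) / np.linalg.norm(z)
    # Case 2: (1-alpha)*lam < max|y_i| <= lam -> a 1-sparse solution.
    i = int(np.argmax(np.abs(y)))
    x[i] = np.sign(y[i]) * (np.abs(y[i]) + (alpha - 1.0) * lam)
    return x

# Example: prox of a small vector with lam = 1, alpha = 0.5.
print(prox_l1_minus_alpha_l2([3.0, -0.2, 0.1], lam=1.0, alpha=0.5))
```

Within an ADMM loop for the AITV-regularized MS model, this operator would be applied in the step that updates the splitting variable attached to the regularizer.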
2. Convergence of Hyperbolic Neural Networks Under Riemannian Stochastic Gradient Descent
Authors: Wes Whiting, Bao Wang, Jack Xin. Communications on Applied Mathematics and Computation (EI), 2024, Issue 2, pp. 1175-1188 (14 pages).
We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, both in batch gradient descent and stochastic gradient descent. We also discuss a Riemannian version of the Adam algorithm. We show numerical simulations of these algorithms on various benchmarks.
Keywords: Hyperbolic neural network; Riemannian gradient descent; Riemannian Adam (RAdam); Training convergence
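As an illustration of the Riemannian gradient descent discussed in this abstract, the sketch below performs one retraction-based step on the Poincaré ball model, where the Riemannian gradient is the Euclidean gradient rescaled by the inverse metric factor ((1 - ||x||^2)/2)^2. This is a generic textbook-style step under assumed names and a simple projection retraction, not necessarily the exact hyperbolic model or exponential map used in the paper.

```python
import numpy as np

def rsgd_step_poincare(x, euclid_grad, lr=0.01, eps=1e-5):
    """One retraction-based Riemannian SGD step on the Poincare ball.

    The Riemannian gradient equals the Euclidean gradient scaled by
    ((1 - ||x||^2) / 2)^2, the inverse of the conformal metric factor.
    A projection back into the open unit ball serves as a cheap retraction;
    the paper may instead use the exact exponential map.
    """
    lam_inv = ((1.0 - np.dot(x, x)) / 2.0) ** 2   # inverse metric scaling
    riem_grad = lam_inv * euclid_grad
    x_new = x - lr * riem_grad
    norm = np.linalg.norm(x_new)
    if norm >= 1.0:                               # retract into the ball
        x_new = x_new / norm * (1.0 - eps)
    return x_new

# Toy usage: Euclidean gradient of f(x) = ||x - target||^2.
x = np.array([0.1, 0.2])
target = np.array([0.5, -0.3])
for _ in range(100):
    x = rsgd_step_poincare(x, 2.0 * (x - target), lr=0.5)
print(x)  # drifts toward the target while staying inside the unit ball
```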
3. QUANTIZATION AND TRAINING OF LOW BIT-WIDTH CONVOLUTIONAL NEURAL NETWORKS FOR OBJECT DETECTION (Cited by 2)
Authors: Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin. Journal of Computational Mathematics (SCIE, CSCD), 2019, Issue 3, pp. 349-359 (11 pages).
We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between full-precision weights and quantized weights during backpropagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of N weights can be done by an exact formula in O(N log N) complexity. When the bit-width is 3 and above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is further determined by network retraining and object detection tests. The LBW-Net has several desirable advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset show that, compared with its 32-bit floating-point counterpart, the performance of the 6-bit LBW-Net is nearly lossless in object detection tasks, and it can even do better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.
Keywords: Quantization; Low bit-width; Deep neural networks; Exact and approximate analytical formulas; Network training; Object detection
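To make the quantization target concrete, here is a hedged NumPy sketch that maps each full-precision weight to zero or a signed power of 2, using a simple magnitude threshold and a clipped exponent range. The thresholding rule and exponent range are generic illustrations, not the paper's semi-analytical scheme or its exact O(N log N) ternary formula.

```python
import numpy as np

def quantize_powers_of_two(w, threshold, exp_min=-6, exp_max=0):
    """Map weights to {0} union {+/- 2^k : exp_min <= k <= exp_max}.

    Illustrative only: the threshold and exponent range are free choices
    here, whereas LBW-Net determines its single free parameter by network
    retraining and object detection tests.
    """
    w = np.asarray(w, dtype=float)
    q = np.zeros_like(w)
    mask = np.abs(w) > threshold                     # small weights -> 0
    exps = np.round(np.log2(np.abs(w[mask])))        # nearest power of 2 in log scale
    exps = np.clip(exps, exp_min, exp_max)
    q[mask] = np.sign(w[mask]) * np.power(2.0, exps)
    return q

w = np.array([0.9, -0.26, 0.03, -0.6, 0.12])
print(quantize_powers_of_two(w, threshold=0.05))
# e.g. 0.9 -> 1.0 (2^0), -0.26 -> -0.25 (-2^-2), 0.03 -> 0.0
```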
4. Deep Learning for Real-Time Crime Forecasting and Its Ternarization (Cited by 2)
Authors: Bao Wang, Penghang Yin, Andrea Louise Bertozzi, P. Jeffrey Brantingham, Stanley Joel Osher, Jack Xin. Chinese Annals of Mathematics, Series B (SCIE, CSCD), 2019, Issue 6, pp. 949-966 (18 pages).
Real-time crime forecasting is important. However, accurate prediction of when and where the next crime will happen is difficult. No known physical model provides a reasonable approximation to such a complex system. Historical crime data are sparse in both space and time, and the signal of interest is weak. In this work, the authors first present a proper representation of crime data. The authors then adapt the spatial-temporal residual network to the well-represented data to predict the distribution of crime in Los Angeles at the scale of hours in neighborhood-sized parcels. These experiments, as well as comparisons with several existing approaches to prediction, demonstrate the superiority of the proposed model in terms of accuracy. Finally, the authors present a ternarization technique to address the resource consumption issue for deployment in the real world. This work is an extension of our short conference proceedings paper [Wang, B., Zhang, D., Zhang, D. H., et al., Deep learning for real time crime forecasting, 2017, arXiv:1707.03340].
Keywords: Crime representation; Spatial-temporal deep learning; Real-time forecasting; Ternarization
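The "proper representation" of sparse crime events referred to above is essentially a spatial-temporal binning into neighborhood-sized cells at hourly resolution. The sketch below shows one generic way to build such a tensor from timestamped (latitude, longitude) events with NumPy; the grid size, bounding box, and hourly bins are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def events_to_grid(times_h, lats, lons, bbox, n_hours, grid=(16, 16)):
    """Bin point events into an (n_hours, H, W) count tensor.

    times_h : event times in hours since the start of the study window
    bbox    : (lat_min, lat_max, lon_min, lon_max) of the study region
    The resulting dense tensor is the kind of input a spatial-temporal
    residual network can consume; all sizes here are made up for the demo.
    """
    lat_min, lat_max, lon_min, lon_max = bbox
    H, W = grid
    tensor = np.zeros((n_hours, H, W), dtype=np.float32)
    t = np.clip(np.floor(times_h).astype(int), 0, n_hours - 1)
    i = np.clip(((lats - lat_min) / (lat_max - lat_min) * H).astype(int), 0, H - 1)
    j = np.clip(((lons - lon_min) / (lon_max - lon_min) * W).astype(int), 0, W - 1)
    np.add.at(tensor, (t, i, j), 1.0)   # accumulate counts per hour and cell
    return tensor

# Toy example: 3 events over a 24-hour window in a made-up bounding box.
tensor = events_to_grid(
    times_h=np.array([0.5, 0.7, 13.2]),
    lats=np.array([34.05, 34.06, 34.10]),
    lons=np.array([-118.25, -118.24, -118.30]),
    bbox=(34.0, 34.2, -118.4, -118.1),
    n_hours=24,
)
print(tensor.shape, tensor.sum())  # (24, 16, 16) 3.0
```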
5. ITERATIVE l1 MINIMIZATION FOR NON-CONVEX COMPRESSED SENSING (Cited by 2)
Authors: Penghang Yin, Jack Xin. Journal of Computational Mathematics (SCIE, CSCD), 2017, Issue 4, pp. 439-451 (13 pages).
An algorithmic framework, based on the difference of convex functions algorithm (DCA), is proposed for minimizing a class of concave sparse metrics for compressed sensing problems. The resulting algorithm iterates a sequence of l1 minimization problems. An exact sparse recovery theory is established to show that the proposed framework always improves on basis pursuit (l1 minimization) and inherits robustness from it. Numerical examples on success rates of sparse solution recovery further illustrate that, unlike most existing non-convex compressed sensing solvers in the literature, our method always outperforms basis pursuit, no matter how ill-conditioned the measurement matrix is. Moreover, the iterative l1 (IL1) algorithm leads the state-of-the-art algorithms on l1/2 and logarithmic minimizations by a wide margin in the strongly coherent (highly ill-conditioned) regime, despite the same objective functions. Last but not least, in the application of magnetic resonance imaging (MRI), the IL1 algorithm easily recovers the phantom image with just 7 line projections.
Keywords: Compressed sensing; Non-convexity; Difference of convex functions algorithm; Iterative l1 minimization
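As a concrete instance of the DCA framework described above, the sketch below minimizes 0.5||Ax - b||^2 + λ(||x||_1 - ||x||_2): each outer iteration linearizes the concave -||x||_2 term at the current iterate and solves the resulting convex l1 subproblem with ISTA steps. The unconstrained least-squares formulation and the ISTA inner solver are simplifications for illustration; the paper works with basis-pursuit-style l1 subproblems and a broader class of concave sparse metrics.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_l1_minus_l2(A, b, lam=0.1, outer=20, inner=200):
    """DCA sketch for 0.5*||Ax-b||^2 + lam*(||x||_1 - ||x||_2).

    Outer loop: linearize -||x||_2 via the subgradient v = x / ||x||_2.
    Inner loop: ISTA on the convex subproblem
        0.5*||Ax-b||^2 + lam*||x||_1 - lam*<v, x>.
    """
    m, n = A.shape
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step size 1/L
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        v = x / nrm if nrm > 0 else np.zeros(n)     # subgradient of ||x||_2
        for _ in range(inner):
            grad = A.T @ (A @ x - b) - lam * v
            x = soft(x - step * grad, step * lam)
    return x

# Toy compressed-sensing example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 27, 64]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = dca_l1_minus_l2(A, b, lam=0.05)
print(np.round(x_hat[[3, 27, 64]], 2))  # should sit close to the true values
```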
6. A TIME DOMAIN BLIND DECORRELATION METHOD OF CONVOLUTIVE MIXTURES BASED ON AN IIR MODEL
Authors: Jie Liu, Jack Xin, Yingyong Qi. Journal of Computational Mathematics (SCIE, CSCD), 2010, Issue 3, pp. 371-385 (15 pages).
We study a time domain decorrelation method of source signal separation from convolutive sound mixtures based on an infinite impulse response (IIR) model. The IIR model uses fewer parameters to capture the physical mixing process and is useful for finding low-dimensional separating solutions. We present inversion formulas to decorrelate the mixture signals and derive filter equations involving second-order time-lagged statistics of the mixtures. We then formulate an l1 constrained minimization problem and solve it by an iterative method. Numerical experiments on recorded sound mixtures show that our method is capable of sound separation in low-dimensional parameter spaces with good perceptual quality and a low correlation coefficient comparable to the known infomax method.
Keywords: Blind decorrelation; Convolutive mixtures; IIR modeling; l1 constrained minimization
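The filter equations in this work are driven by second-order time-lagged statistics of the mixture channels. The snippet below estimates those lagged cross-correlation matrices from multichannel audio with NumPy; a decorrelation criterion would drive their off-diagonal entries toward zero. It is a generic estimator, not the paper's IIR filter derivation or its l1-constrained solver.

```python
import numpy as np

def lagged_cross_correlations(x, lags):
    """Estimate R(tau) = E[x(t) x(t - tau)^T] for each lag tau.

    x    : array of shape (n_channels, n_samples), the recorded mixtures
    lags : iterable of non-negative integer lags
    Returns a dict {tau: (n_channels x n_channels) matrix}. A decorrelation
    criterion pushes the off-diagonal entries of these matrices toward zero.
    """
    n_ch, n_s = x.shape
    x = x - x.mean(axis=1, keepdims=True)          # remove channel means
    R = {}
    for tau in lags:
        a = x[:, tau:]                             # x(t)
        b = x[:, :n_s - tau]                       # x(t - tau)
        R[tau] = a @ b.T / (n_s - tau)
    return R

# Two-channel toy mixture: channel 2 is a delayed, attenuated copy plus noise.
rng = np.random.default_rng(1)
s = rng.standard_normal(10_000)
x = np.vstack([s, 0.6 * np.roll(s, 3) + 0.1 * rng.standard_normal(10_000)])
R = lagged_cross_correlations(x, lags=[0, 3])
print(np.round(R[3], 2))  # strong off-diagonal entry at the true delay of 3
```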