Journal Articles
3 articles found
1. QUANTIZATION AND TRAINING OF LOW BIT-WIDTH CONVOLUTIONAL NEURAL NETWORKS FOR OBJECT DETECTION (Cited: 2)
Authors: Penghang Yin, Shuai Zhang, Yingyong Qi, Jack Xin. Journal of Computational Mathematics (SCIE, CSCD), 2019, Issue 3, pp. 349–359 (11 pages).
Abstract: We present LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs). Specifically, we quantize the weights to zero or powers of 2 by minimizing the Euclidean distance between full-precision weights and quantized weights during backpropagation (weight learning). We characterize the combinatorial nature of the low bit-width quantization problem. For 2-bit (ternary) CNNs, the quantization of N weights can be done by an exact formula in O(N log N) complexity. When the bit-width is 3 or above, we further propose a semi-analytical thresholding scheme with a single free parameter for quantization that is computationally inexpensive. The free parameter is determined by network retraining and object detection tests. LBW-Net has several desirable advantages over full-precision CNNs, including considerable memory savings, energy efficiency, and faster deployment. Our experiments on the PASCAL VOC dataset show that, compared with its 32-bit floating-point counterpart, the performance of the 6-bit LBW-Net is nearly lossless on object detection tasks, and can even do better in real-world visual scenes, while empirically enjoying more than 4× faster deployment.
Keywords: quantization, low bit-width, deep neural networks, exact and approximate analytical formulas, network training, object detection
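The quantization step described in the abstract (mapping each weight to zero or a signed power of 2, nearest in Euclidean distance) can be sketched elementwise as below. This is a minimal illustration only: it omits the paper's exact O(N log N) ternary formula and the semi-analytical threshold with a trained free parameter, and the candidate-level construction and the `bits` parameter are assumptions for illustration.

```python
import numpy as np

def quantize_pow2(w, bits=3):
    """Round each weight to 0 or a signed power of 2, elementwise
    nearest in Euclidean distance. Illustrative sketch only; assumes
    w has at least one nonzero entry."""
    w = np.asarray(w, dtype=float)
    # candidate exponents chosen around the largest weight magnitude
    max_e = int(np.floor(np.log2(np.abs(w).max())))
    exps = np.arange(max_e - (2 ** (bits - 1) - 2), max_e + 1)
    levels = np.concatenate(([0.0], 2.0 ** exps))  # {0} U {2^k}
    # nearest candidate magnitude for each |w_i|
    idx = np.argmin(np.abs(np.abs(w)[..., None] - levels), axis=-1)
    return np.sign(w) * levels[idx]

w = np.array([0.9, -0.26, 0.03, -1.1])
print(quantize_pow2(w, bits=3))
```

With 3 bits this yields seven representable values {0, ±0.25, ±0.5, ±1} around the weight scale, so each weight snaps to its closest level.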
2. Deep Learning for Real-Time Crime Forecasting and Its Ternarization (Cited: 2)
Authors: Bao Wang, Penghang Yin, Andrea Louise Bertozzi, P. Jeffrey Brantingham, Stanley Joel Osher, Jack Xin. Chinese Annals of Mathematics, Series B (SCIE, CSCD), 2019, Issue 6, pp. 949–966 (18 pages).
Abstract: Real-time crime forecasting is important. However, accurately predicting when and where the next crime will happen is difficult. No known physical model provides a reasonable approximation to such a complex system. Historical crime data are sparse in both space and time, and the signal of interest is weak. In this work, the authors first present a proper representation of crime data. The authors then adapt a spatial-temporal residual network to the well-represented data to predict the distribution of crime in Los Angeles at the scale of hours in neighborhood-sized parcels. These experiments, as well as comparisons with several existing approaches to prediction, demonstrate the superiority of the proposed model in terms of accuracy. Finally, the authors present a ternarization technique to address the resource-consumption issue for deployment in the real world. This work is an extension of our short conference proceedings paper [Wang, B., Zhang, D., Zhang, D. H., et al., Deep learning for real time crime forecasting, 2017, arXiv:1707.03340].
Keywords: crime representation, spatial-temporal deep learning, real-time forecasting, ternarization
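The abstract mentions ternarization for resource-efficient deployment but gives no formula. A common ternary-weight heuristic (not the paper's exact scheme) maps weights to {-α, 0, +α}, zeroing small entries and setting α to the mean surviving magnitude; the `frac` threshold factor below is a hypothetical choice for illustration.

```python
import numpy as np

def ternarize(w, frac=0.7):
    """Illustrative ternarization sketch: zero out weights with
    magnitude below delta = frac * mean(|w|), then replace survivors
    with a shared scale alpha = mean magnitude of the survivors."""
    w = np.asarray(w, dtype=float)
    delta = frac * np.abs(w).mean()
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * np.sign(w) * mask

w = np.array([0.8, -0.05, 0.3, -0.9])
print(ternarize(w))
```

The resulting weights need only 2 bits plus one shared float per tensor, which is the source of the memory and energy savings claimed for deployment.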
3. ITERATIVE l1 MINIMIZATION FOR NON-CONVEX COMPRESSED SENSING (Cited: 2)
Authors: Penghang Yin, Jack Xin. Journal of Computational Mathematics (SCIE, CSCD), 2017, Issue 4, pp. 439–451 (13 pages).
Abstract: An algorithmic framework, based on the difference of convex functions algorithm (DCA), is proposed for minimizing a class of concave sparse metrics for compressed sensing problems. The resulting algorithm iterates a sequence of l1 minimization problems. An exact sparse recovery theory is established to show that the proposed framework always improves on basis pursuit (l1 minimization) and inherits robustness from it. Numerical examples on success rates of sparse solution recovery further illustrate that, unlike most existing non-convex compressed sensing solvers in the literature, our method always outperforms basis pursuit, no matter how ill-conditioned the measurement matrix is. Moreover, the iterative l1 (IL1) algorithm leads the state-of-the-art algorithms on l1/2 and logarithmic minimizations by a wide margin in the strongly coherent (highly ill-conditioned) regime, despite the same objective functions. Last but not least, in the application of magnetic resonance imaging (MRI), the IL1 algorithm easily recovers the phantom image with just 7 line projections.
Keywords: compressed sensing, non-convexity, difference of convex functions algorithm, iterative l1 minimization
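The DCA framework above splits the concave sparse metric into a convex part minus a convex part, then repeatedly linearizes the subtracted part and solves the resulting convex l1 subproblem. A minimal sketch for one commonly used instance of such a metric, l1 − l2 (the abstract does not fix the metric, and the function name and solver choice are assumptions), with each l1 subproblem cast as a linear program:

```python
import numpy as np
from scipy.optimize import linprog

def dca_l1_l2(A, b, iters=10):
    """DCA sketch for min ||x||_1 - ||x||_2 subject to Ax = b.
    Each iteration linearizes the concave term -||x||_2 at the current
    iterate and solves the convex l1 subproblem as an LP via the split
    x = u - v with u, v >= 0. Illustrative only."""
    m, n = A.shape
    x = np.zeros(n)  # first pass (gradient g = 0) is plain basis pursuit
    for _ in range(iters):
        nx = np.linalg.norm(x)
        g = x / nx if nx > 0 else np.zeros(n)  # subgradient of ||x||_2
        # min 1^T(u+v) - g^T(u-v)  s.t.  A(u-v) = b,  u, v >= 0
        c = np.concatenate([1 - g, 1 + g])
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=(0, None), method="highs")
        uv = res.x
        x = uv[:n] - uv[n:]
    return x

A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
print(dca_l1_l2(A, b, iters=5))
```

Because the linearized term only tilts the l1 objective, every iterate stays feasible and the objective is non-increasing, which is the sense in which the framework "improves on basis pursuit" rather than replacing it.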