Journal articles — 2 results found
1. Robust Adaptive Filtering Algorithm for Blind Restoration of Blurred Images (cited 4 times)
Authors: Wang Fang, Li Yi, +1 author, Lu Jianfeng, Yang Jingyu. 《计算机辅助设计与图形学学报》 (Journal of Computer-Aided Design & Computer Graphics), EI / CSCD / Peking University Core, 2014, Issue 3, pp. 457–464 (8 pages)
Blind restoration of motion-blurred images is one of the key problems in image processing. Owing to the complexity of estimating the blur information (the blur kernel) and the influence of image noise, existing algorithms often fail to achieve high-quality restoration. To improve blur estimation, an improved algorithm based on adaptive linear filtering is proposed. First, adaptive dynamic linear filtering is introduced into the original blur-estimation process to suppress the influence of noise and thereby improve the estimate; at the same time it serves to adjust the optimization objective, making the original problem easier to solve and yielding a high-quality estimate of the blur. On this basis, an improved reweighted split Bregman iteration is proposed for recovering the original image once the blur has been estimated, further improving the quality of the restoration. Experimental results show that, compared with three existing blind-restoration algorithms, the proposed algorithm estimates the blur more accurately, is more robust on most restoration tasks, and can be applied effectively to motion-blurred image restoration.
Keywords: blind image restoration, regularization methods, L1-norm optimization, linear filtering, split Bregman iteration
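The split Bregman iteration named in the keywords alternates a linear solve, a soft-thresholding (shrinkage) step, and a Bregman-variable update. The abstract's reweighted variant is not specified, but the plain scheme can be sketched on a toy 1-D total-variation denoising problem; all function names and parameter values below are illustrative, not the paper's.

```python
import numpy as np

def shrink(x, t):
    # Soft-thresholding: the proximal operator of the L1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv(f, lam=0.3, mu=2.0, n_iter=100):
    # Minimize 0.5*||u - f||^2 + lam*||D u||_1 (1-D total variation)
    # by split Bregman: a linear solve in u, shrinkage in d, Bregman update in b.
    n = len(f)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward-difference operator
    A = np.eye(n) + mu * (D.T @ D)                 # normal matrix, fixed per run
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, lam / mu)
        b = b + Du - d
    return u

# Denoise a noisy piecewise-constant signal.
rng = np.random.default_rng(0)
truth = np.concatenate([np.zeros(20), np.ones(20)])
noisy = truth + 0.1 * rng.normal(size=40)
restored = split_bregman_tv(noisy)
```

The same alternation carries over to the deblurring setting of the paper, where the data term couples `u` to the estimated blur kernel instead of comparing it directly to `f`.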
2. Probabilistic models of vision and max-margin methods
Authors: Alan YUILLE, Xuming HE. Frontiers of Electrical and Electronic Engineering in China, CSCD, 2012, Issue 1, pp. 94–106 (13 pages)
It is attractive to formulate problems in computer vision and related fields in terms of probabilistic estimation where the probability models are defined over graphs, such as grammars. The graphical structures, and the state variables defined over them, give a rich knowledge representation which can describe the complex structures of objects and images. The probability distributions defined over the graphs capture the statistical variability of these structures. These probability models can be learnt from training data with limited amounts of supervision. But learning these models suffers from the difficulty of evaluating the normalization constant, or partition function, of the probability distributions, which can be extremely computationally demanding. This paper shows that by placing bounds on the normalization constant we can obtain computationally tractable approximations. Surprisingly, for certain choices of loss functions, we obtain many of the standard max-margin criteria used in support vector machines (SVMs) and hence we reduce the learning to standard machine learning methods. We show that many machine learning methods can be obtained in this way as approximations to probabilistic methods, including multi-class max-margin, ordinal regression, max-margin Markov networks and parsers, multiple-instance learning, and latent SVM. We illustrate this work by computer vision applications including image labeling, object detection and localization, and motion estimation. We speculate that better results can be obtained by using better bounds and approximations.
Keywords: structured prediction, max-margin learning, probabilistic models, loss function
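The abstract's central observation, that bounding the partition function recovers max-margin criteria, can be illustrated numerically: since max(s) ≤ logsumexp(s) ≤ max(s) + log K over K labels, replacing log Z in the loss-augmented negative log-likelihood with a max yields the structured hinge loss. A toy sketch with hypothetical per-label scores (not the paper's experiments):

```python
import numpy as np

def logsumexp(s):
    # Numerically stable log of the partition function Z = sum_y exp(s_y).
    m = np.max(s)
    return m + np.log(np.sum(np.exp(s - m)))

def structured_hinge(scores, y, delta):
    # Replace log Z by a max over loss-augmented scores: the
    # margin-rescaled hinge loss of structured SVMs.
    return np.max(scores + delta) - scores[y]

rng = np.random.default_rng(0)
scores = rng.normal(size=5)   # hypothetical per-label scores s_y'
y = 2                          # true label
delta = np.ones(5)
delta[y] = 0.0                 # 0/1 task loss Delta(y, y')

hinge = structured_hinge(scores, y, delta)
loss_aug_nll = logsumexp(scores + delta) - scores[y]
# max <= logsumexp <= max + log K, so the hinge loss brackets the
# loss-augmented negative log-likelihood within log K.
```

The same substitution, applied to richer graphical models, is what yields the max-margin Markov networks, latent SVMs, and other criteria listed in the abstract.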