
Research on a biologically-inspired computational model for image saliency detection (基于生物激励计算模型在图像显著性提取中的研究) · Cited by 1
Abstract: A biologically-inspired model for computing image saliency is proposed. First, a set of sparse basis functions consistent with visual responses to natural stimuli is learned from eye-fixation patches drawn from an eye-tracking dataset. A computational model of the image is then built on the principle of sparse representation, and three saliency features are extracted from it: global continuity, regional color contrast, and local complexity contrast. Next, following the way stimulus-driven activity in visual cells is modulated, a new feature-combination method is proposed to fuse these features. Experiments on extracting regions of interest from several typical scenes show that the algorithm outperforms competing methods. The algorithm is further applied to the fusion of virtual and real scenes, where it effectively extracts the valid regions of the real scene and rejects virtual-scene areas.
Source: Infrared and Laser Engineering (《红外与激光工程》; EI, CSCD, PKU Core), 2013, No. 3: 823-828 (6 pages)
Funding: National Natural Science Foundation of China (40905011)
Keywords: machine vision; saliency detection; biologically-inspired; sparse representation
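
To make the pipeline sketched in the abstract more concrete, below is a minimal Python illustration of one generic sparse-coding saliency cue: a dictionary is learned from image patches and a patch's "local complexity" is read off the energy of its sparse code. This is an assumption-laden sketch, not the authors' published implementation: the paper learns its basis from eye-fixation patches of an eye-tracking dataset and fuses three cues (global continuity, regional color contrast, local complexity contrast) with a cell-modulation-inspired combination rule, none of which is reproduced here; the function name, the use of scikit-learn's dictionary learner, and all parameter values are stand-ins.

```python
# Illustrative sketch only (not the paper's method): one sparse-coding "local complexity" cue.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

def sparse_complexity_cue(gray_image, patch_size=8, n_atoms=64, n_patches=2000):
    """Per-patch local-complexity scores from sparse-code energy (illustrative only)."""
    # 1. Sample patches; the paper instead learns its basis from eye-fixation patches.
    patches = extract_patches_2d(gray_image, (patch_size, patch_size),
                                 max_patches=n_patches, random_state=0)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)            # remove each patch's DC component

    # 2. Learn a sparse basis (dictionary) and encode every patch over it.
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
    codes = dico.fit_transform(X)

    # 3. Patches that need more sparse-code energy are treated as more "complex";
    #    scores are normalised to [0, 1] so they could enter a later feature fusion.
    complexity = np.abs(codes).sum(axis=1)
    span = complexity.max() - complexity.min()
    return (complexity - complexity.min()) / (span + 1e-12)
```

Mapping the per-patch scores back to the patch centres would give a coarse single-cue map, which could then be normalised and combined with the other cues described in the abstract.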
  • Related literature

References (20)

  • 1Koene A R, Zhaoping L. Feature-specific interactions in salience from combined feature contrasts: evidence for a bottom-up saliency map in V1 [J]. Journal of Vision, 2007, 7(7): 1-14.
  • 2Itti L, Koch C, Niebur E. A model of saliency-based visual attention for rapid scene analysis [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998, 20(11): 1254-1259.
  • 3Achanta R, Hemami S, Estrada F, et al. Frequency-tuned salient region detection [C]//IEEE Conference on Computer Vision and Pattern Recognition, 2009: 1597-1604.
  • 4Goferman S, Zelnik-Manor L, Tal A. Context-aware saliency detection [C]//IEEE Conference on Computer Vision and Pattern Recognition, 2010: 2376-2383.
  • 5Hou X, Zhang L. Saliency detection: a spectral residual approach [C]//IEEE Conference on Computer Vision and Pattern Recognition, 2007: 1-8.
  • 6Wang W, Wang Y, Huang Q, et al. Measuring visual saliency by site entropy rate [C]//IEEE Conference on Computer Vision and Pattern Recognition, 2010: 135-143.
  • 7Hou X, Zhang L. Dynamic visual attention: searching for coding length increments[C]//Advances in Neural Information Processing Systems(NIPS), 2008: 35-41.
  • 8Lin Y, Fang B, Tang Y. A computational model for saliency maps by using local entropy [C]//Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010: 967-973.
  • 9杨亚威, 李俊山, 杨威, 赵方舟. Multi-class and multi-view object detection method using sparsified biological visual features [J]. Infrared and Laser Engineering, 2012, 41(1): 267-272. (Cited by 1)
  • 10Cheng M M, Zhang G X, Mitra N J. Global contrast based salient region detection [C]//IEEE Conference on Computer Vision and Pattern Recognition, 2011: 409-416.

Secondary references (28)

  • 1宿丁, 张启衡, 陶冰洁, 谢盛华. Fractal segmentation algorithm for multi-source multi-target images against complex backgrounds [J]. Infrared and Laser Engineering, 2007, 36(3): 387-390. (Cited by 16)
  • 2Lee S, Kim K, Kim J Y, et al. Familiarity based unified visual attention model for fast and robust object recognition [J]. Pattern Recognition, 2009, 43(3): 1116-1128.
  • 3Kaplan L M. Improved SAR target detection via extended fractal features [J]. IEEE Transactions on Aerospace and Electronic Systems, 2001, 37(2): 436-451.
  • 4Kuncheva L I, Bezdek J C, Duin R P W. Decision templates for multiple classifier fusion: an experimental comparison [J]. Pattern Recognition, 2001, 34(2): 299-314.
  • 5Gao G, Kuang G Y, Zhang Q, et al. Fast detecting and locating groups of targets in high-resolution SAR images[J]. Pattern Recognition, 2007, 40(4): 1378-1384.
  • 6Lowe D G. Object recognition from local scale-invariant features [C]//International Conference on Computer Vision, 1999: 1150-1157.
  • 7Papageorgiou C, Poggio T. A trainable system for object detection [J]. International Journal of Computer Vision, 2000, 38(1): 15-33.
  • 8Heisele B, Serre T, Mukherjee S, et al. Feature reduction and hierarchy of classifiers for fast object detection in video images [C]// IEEE Conference on Computer Vision and Pattern Recognition, 2001: 18-24.
  • 9Fergus R, Perona P, Zisserman A. Object class recognition by unsupervised scale-invariant learning[C]//IEEE Conference on Computer Vision and Pattern Recognition, 2003: 264-271.
  • 10Schneiderman H, Kanade T. A statistical model for 3D object detection applied to faces and cars [C]// IEEE Conference on Computer Vision and Pattern Recognition, 2000: 746-751.

Co-citing literature (8)

Co-cited literature (16)

  • 1Bosc E, Pepion R, Le Callet P, et al. Towards a new quality metric for 3-D synthesized view assessment [J]. IEEE Journal of Selected Topics in Signal Processing, 2011, 5(7): 1332-1343.
  • 2Hu S, Kwong S, Zhang Y, et al. Rate-distortion optimized rate control for depth map-based 3-D video coding [J]. IEEE Transactions on Image Processing, 2013, 22(2): 585-594.
  • 3Grau O, Borel T, Kauff P, et al. 3D-TV R&D activities in Europe [J]. IEEE Transactions on Broadcasting, 2011, 57(2): 408-420.
  • 4Holliman N S, Dodgson N A, Favalora G E, et al. Three-dimensional displays: a review and applications analysis [J]. IEEE Transactions on Broadcasting, 2011, 57(2): 362-371.
  • 5Dodgson N A. Autostereoscopic 3D displays [J]. Computer, 2005, 38(8): 31-36.
  • 6Cellatoglu A, Balasubramanian K. Autostereoscopic imaging techniques for 3D TV: proposals for improvements [J]. Journal of Display Technology, 2013, 9(8): 666-672.
  • 7Woods A, Docherty T, Koch R. Image distortions in stereoscopic video systems [C]//SPIE, Stereoscopic Displays and Applications, 1993: 36-48.
  • 8Chamaret C, Godeffroy S, Lopez P, et al. Adaptive 3D rendering based on region-of-interest[C]//SPIE, Stereoscopic Displays and Applications XXI, 2010, 7524: 1-12.
  • 9Kwon K, Lim Y, Kim N. Vergence control of binocular stereoscopic camera using disparity information [J]. Journal of the Optical Society of Korea, 2009, 13(3): 379-385.
  • 10Lei J, Zhang H, Hou C, et al. Segmentation-based adaptive vergence control for parallel multiview stereoscopic images [J]. Optik - International Journal for Light and Electron Optics, 2013, 124(15): 2097-2100.

Citing literature (1)

Secondary citing literature (6)
