Abstract
Based on a study of the lens imaging model and the pinhole imaging model, a real-time depth-of-field rendering algorithm driven by luminance and depth information is proposed, applying depth-of-field rendering to an augmented reality system for the first time. The algorithm is implemented with OpenGL and Visual Studio 2008. First, a three-dimensional coordinate system is established and the depth information of the real scene is tracked and acquired in real time, so that the generated virtual objects can be merged into the real scene. The fused virtual-real scene is then blurred according to its luminance and depth. Finally, the images are composited and the simulated depth-of-field effect is output to the screen. Experimental results show that the new algorithm comes closer to the depth-of-field effect captured by a real camera and is suitable for augmented reality systems.
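To illustrate the kind of computation the abstract describes (per-pixel blur driven by depth via a thin-lens model and modulated by luminance), the minimal C++ sketch below derives a blur radius from the standard thin-lens circle-of-confusion formula and scales it by a luminance weight. The parameter names, the linear luminance weighting, and the pixel-scaling factor are assumptions for illustration only and are not taken from the paper.

#include <cmath>
#include <cstdio>

// Circle-of-confusion diameter (in image-plane units) for an object at
// distance d, given a thin lens of focal length f and aperture diameter A
// focused at distance s (standard thin-lens formula).
double circleOfConfusion(double d, double s, double f, double A) {
    return A * std::fabs(d - s) / d * f / (s - f);
}

// Per-pixel blur radius in pixels: the circle of confusion scaled to the
// sensor, attenuated for dark pixels so that bright regions blur more
// visibly. The weighting below is an assumption, not the paper's formula.
double blurRadiusPx(double depth, double luminance,
                    double focusDist, double focalLength,
                    double aperture, double pxPerMeter) {
    double coc = circleOfConfusion(depth, focusDist, focalLength, aperture);
    double lumWeight = 0.5 + 0.5 * luminance;  // assumed linear weighting in [0.5, 1]
    return coc * pxPerMeter * lumWeight;
}

int main() {
    // Example: 50 mm lens at f/2, focused at 2 m; luminance of 0.8.
    const double f = 0.050, N = 2.0, A = f / N, s = 2.0;
    const double depths[] = {0.5, 1.0, 2.0, 4.0, 8.0};
    for (double d : depths) {
        std::printf("depth %.1f m -> blur radius %.2f px\n",
                    d, blurRadiusPx(d, 0.8, s, f, A, /*pxPerMeter=*/40000.0));
    }
    return 0;
}

In a full implementation this per-pixel radius would drive a GPU blur pass (for example a separable Gaussian in a fragment shader) over the composited virtual-real frame before it is output to the screen.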
Source
Journal of System Simulation (《系统仿真学报》)
Indexed in CAS, CSCD, and the Peking University Core Journal list
2012, No. 8, pp. 1612-1617 (6 pages)
Funding
National Key Technology R&D Program of China (2006BAK13B10)
Shanghai International Science and Technology Cooperation Fund Project (09510700900)
Shanghai Leading Academic Discipline Project (J50103)
Project supported by the Science and Technology Commission of Shanghai Municipality (115115034400)
Keywords
depth of field
real-time
Augmented Reality
luminance information