Abstract
Existing human-computer interaction methods for virtual moving targets rely on a single somatosensory feature and cannot identify and track targets with high precision, resulting in low-fidelity virtual hand models. This paper proposes a novel human-computer interaction method for virtual moving targets based on multi-somatosensory fusion. Depth and color images are captured with a Kinect sensor, and color histogram matching is used to identify the virtual moving target. Structural information of the target is then extracted, and all of its somatosensory features are fused to track the specified target. The influence of damping theory on the stability of human-computer interaction data is analyzed, and a Cartesian space mapping relationship is constructed to realize human-computer interaction with virtual moving targets. Simulation results show that the proposed method obtains a high-fidelity virtual hand model while accurately tracking and identifying targets, demonstrating its practicability.
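The identification step described above, matching a candidate image region against a target by color histogram, can be sketched as follows. This is a minimal illustration only: the paper does not specify its binning scheme or similarity measure, so the quantized joint RGB histogram and the Bhattacharyya coefficient used here are assumptions.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Normalized joint RGB histogram of an (N, 3) uint8 pixel array."""
    # Quantize each channel into `bins` levels, then count joint occurrences.
    q = (pixels.astype(np.int64) * bins) // 256
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / hist.sum()

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient in [0, 1]; 1 means identical histograms."""
    return float(np.sum(np.sqrt(h1 * h2)))

# Toy usage: a region whose colors resemble the target template scores higher.
rng = np.random.default_rng(0)
template  = rng.integers(100, 150, size=(500, 3), dtype=np.uint8)
similar   = rng.integers(100, 150, size=(500, 3), dtype=np.uint8)
different = rng.integers(0, 50, size=(500, 3), dtype=np.uint8)

h_t = color_histogram(template)
print(bhattacharyya(h_t, color_histogram(similar)))    # high, close to 1
print(bhattacharyya(h_t, color_histogram(different)))  # low, close to 0
```

In a full pipeline, the candidate region with the highest coefficient would be taken as the identified target and handed to the feature-fusion tracking stage.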
Authors
LONG Nian; LIU Zhi-hui (Engineering and Technology College, Hubei University of Technology, Wuhan, Hubei 430000, China)
Source
Computer Simulation (《计算机仿真》)
Peking University Core Journal (北大核心)
2022, No. 6, pp. 201-205 (5 pages)
Keywords
Multi-somatosensory fusion
Virtual moving target
Human-computer interaction
Feature extraction
Target tracking