
Hand Interface in Traditional Modeling and Animation Tasks

Abstract: 3-D task space in modeling and animation is usually reduced to the separate control dimensions supported by conventional interactive devices. This limitation maps only a partial view of the problem to the device space at a time, and results in a tedious and unnatural control interface. This paper uses the DataGlove interface for modeling and animating scene behaviors. The modeling interface selects, scales, rotates, translates, copies and deletes instances of the primitives. These basic modeling processes are performed directly in the task space using hand shapes and motions. Hand shapes are recognized as discrete states that trigger commands, and hand motions are mapped to the movement of a selected instance. The interactions through the hand interface place the user as a participant in the process of behavior simulation. Both event triggering and role switching of the hand are experimented with in simulation. The event mode of the hand triggers control signals or commands through a menu interface. The object mode of the hand simulates itself as an object whose appearance or motion influences the motions of other objects in the scene. The involvement of the hand creates a diversity of dynamic situations for testing variable scene behaviors. Our experiments have shown the potential use of this interface directly in the 3-D modeling and animation task space.
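To make the interaction model in the abstract concrete, the following is a minimal sketch, not taken from the paper: all names (HandShape, Instance, ModelingSession, on_hand_shape, on_hand_motion) are hypothetical. It illustrates how recognized hand shapes might act as discrete command triggers while continuous hand motion drives the transform of the currently selected instance.

```python
# Illustrative sketch only (hypothetical names, not the paper's implementation).
# Discrete hand shapes trigger modeling commands; hand motion moves the selection.

from dataclasses import dataclass, field
from enum import Enum, auto


class HandShape(Enum):
    """Discrete hand postures recognized from glove joint angles (assumed set)."""
    POINT = auto()       # select the instance under the fingertip ray
    FIST = auto()        # grab: bind hand motion to the selected instance
    FLAT = auto()        # release: unbind hand motion
    SCISSORS = auto()    # copy the selected instance
    THUMB_DOWN = auto()  # delete the selected instance


@dataclass
class Instance:
    name: str
    position: tuple = (0.0, 0.0, 0.0)


@dataclass
class ModelingSession:
    instances: list = field(default_factory=list)
    selected: Instance = None
    grabbing: bool = False

    def on_hand_shape(self, shape: HandShape, pointed: Instance = None):
        """Hand shapes are recognized as discrete states that trigger commands."""
        if shape is HandShape.POINT and pointed is not None:
            self.selected = pointed
        elif shape is HandShape.FIST:
            self.grabbing = self.selected is not None
        elif shape is HandShape.FLAT:
            self.grabbing = False
        elif shape is HandShape.SCISSORS and self.selected is not None:
            self.instances.append(
                Instance(self.selected.name + "_copy", self.selected.position))
        elif shape is HandShape.THUMB_DOWN and self.selected is not None:
            self.instances.remove(self.selected)
            self.selected, self.grabbing = None, False

    def on_hand_motion(self, delta: tuple):
        """Hand motion is mapped to the movement of the selected instance while grabbing."""
        if self.grabbing and self.selected is not None:
            x, y, z = self.selected.position
            dx, dy, dz = delta
            self.selected.position = (x + dx, y + dy, z + dz)
```

In the behavior-simulation part described above, the same glove input could plausibly be routed two ways: in the event mode to a menu handler that emits control signals, and in the object mode to an Instance representing the hand itself, so that its motion influences other objects in the scene.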
Author: 孙汉秋
Source: Journal of Computer Science & Technology (计算机科学技术学报, English edition), indexed in SCIE, EI, CSCD, 1996, Issue 3, pp. 286-295 (10 pages)
Keywords: interactive graphics, scene modeling, behavior simulation, DataGlove, virtual reality

