The 3-D task space in modeling and animation is usually reduced to the separate control dimensions supported by conventional interactive devices. This limitation maps only a partial view of the problem onto the device space at a time, and results in a tedious and unnatural control interface. This paper uses the DataGlove interface for modeling and animating scene behaviors. The modeling interface selects, scales, rotates, translates, copies, and deletes instances of the primitives. These basic modeling processes are performed directly in the task space using hand shapes and motions. Hand shapes are recognized as discrete states that trigger commands, and hand motions are mapped to the movement of a selected instance. Interaction through the hand interface places the user as a participant in the process of behavior simulation. Both event triggering and role switching of the hand are tested in simulation. In event mode, the hand triggers control signals or commands through a menu interface. In object mode, the hand acts as an object whose appearance or motion influences the motions of other objects in the scene. The involvement of the hand creates a diversity of dynamic situations for testing variable scene behaviors. Our experiments have shown the potential of using this interface directly in the 3-D modeling and animation task space.
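The mapping described above, where discrete hand shapes trigger modeling commands while continuous hand motion drives the selected instance, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all names here (the shape labels, `GLOVE_COMMANDS`, `interpret`) are hypothetical.

```python
# Hypothetical sketch of the two-channel glove interface:
# discrete hand shapes -> command triggers,
# continuous hand motion -> movement of the selected instance.
# Shape labels and command names are illustrative assumptions.

GLOVE_COMMANDS = {
    "fist": "select",
    "pinch": "scale",
    "point": "translate",
    "flat": "rotate",
    "spread": "copy",
    "thumb_down": "delete",
}

def interpret(hand_shape, hand_delta, selected_pos):
    """Map a recognized hand shape to a command; when the command is
    'translate', apply the hand's motion delta to the selected instance."""
    command = GLOVE_COMMANDS.get(hand_shape)
    if command == "translate" and selected_pos is not None:
        # Hand motion maps directly to object movement in the task space.
        selected_pos = tuple(p + d for p, d in zip(selected_pos, hand_delta))
    return command, selected_pos

# Example: a "point" shape translates the selected instance by the hand delta.
cmd, pos = interpret("point", (0.1, 0.0, -0.2), (1.0, 2.0, 3.0))
```

The key design point this illustrates is the separation of the two input channels: shape recognition produces discrete states (command triggers), while tracked motion supplies the continuous transform parameters, so both arrive through one device in the task space itself.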