Understanding how signal properties are optimized for the reliable transmission of information requires accurate description of the signal in time and space. For movement-based signals where movement is restricted to a single plane, measurements from a single viewpoint can be used to consider a range of viewing positions based on simple geometric calculations. However, considering signal properties from a range of viewing positions is more problematic for movements extending into three dimensions (3D). We present here a new framework that overcomes this limitation and enables us to quantify the extent to which movement-based signals are view-specific. To illustrate its application, a Jacky lizard tail flick signal was filmed with synchronized cameras and the position of the tail tip digitized for both recordings. Camera alignment enabled the construction of a 3D display action pattern profile. We analyzed the profile directly and used it to create a detailed 3D animation. In the virtual environment, we were able to film the same signal from multiple viewing positions and use a computational motion analysis algorithm (gradient detector model) to measure local image velocity in order to predict view-dependent differences in signal properties. This approach will enable consideration of a range of questions concerning movement-based signal design and evolution that were previously out of reach [Current Zoology 56 (3): 327-336, 2010].
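The abstract does not give implementation details for the two-camera reconstruction, but as a rough illustration of how a 3D display action pattern profile might be built from synchronized, aligned recordings, the sketch below triangulates a single digitized point (such as the tail tip) from its pixel coordinates in two calibrated views. The use of NumPy, the projection-matrix inputs, and the helper name are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: linear (DLT) triangulation of one digitized point
# (e.g., the tail tip) from two synchronized, calibrated camera views.
# P1, P2 and the function name are assumed for this example; the original
# study does not specify its reconstruction procedure.
import numpy as np

def triangulate_point(P1, P2, xy1, xy2):
    """Recover a 3D point from its pixel coordinates in two camera views.

    P1, P2 : (3, 4) camera projection matrices (intrinsics @ extrinsics).
    xy1, xy2 : (u, v) pixel coordinates of the same point in each view.
    Returns the 3D point (x, y, z) in world coordinates.
    """
    u1, v1 = xy1
    u2, v2 = xy2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 by SVD; the solution is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

Applied frame by frame to the digitized tail-tip coordinates from the two recordings, an estimator of this kind would yield the 3D trajectory that the abstract refers to as a display action pattern profile.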
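The "gradient detector model" is not described further in this excerpt. As a hedged sketch of what a gradient-based measure of local image velocity might compute, the example below solves the brightness-constancy constraint Ix·vx + Iy·vy + It = 0 by least squares over a small window (a Lucas-Kanade-style estimator). The window size, grayscale-frame assumption, and function name are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch only: gradient-based local image velocity from two
# consecutive grayscale frames, via the brightness-constancy constraint
#   Ix*vx + Iy*vy + It = 0
# solved by least squares over a small neighbourhood around a chosen pixel.
import numpy as np

def local_velocity(frame0, frame1, cy, cx, half_win=7):
    """Estimate (vx, vy) in pixels/frame around pixel (cy, cx)."""
    # Spatial gradients from the first frame, temporal difference between frames.
    Iy, Ix = np.gradient(frame0.astype(float))
    It = frame1.astype(float) - frame0.astype(float)

    win = (slice(cy - half_win, cy + half_win + 1),
           slice(cx - half_win, cx + half_win + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # N x 2
    b = -It[win].ravel()                                      # N

    # Least-squares solution of A @ [vx, vy] = b.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # larger magnitudes indicate faster local motion
```

Running such an estimator over frames rendered from different virtual camera positions is one way the view-dependent differences in local image velocity mentioned in the abstract could be quantified.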