Creating Animacy Displays from Scenes of Human Action
1 Phil McAleer, 2 Barbara Mazzarino, 2 Gualtiero Volpe, 2 Antonio Camurri, 1 Kirsty Smith, 1 Helena Paterson, 1 Frank E. Pollick
1 Department of Psychology, University of Glasgow, Glasgow, Scotland; 2 Infomus Lab, D.I.S.T., University of Genova, Genova, Italy

Introduction
Heider and Simmel (1944) showed that people, on viewing a simple animation of geometric shapes (a disc, a large triangle and a small triangle), would attribute emotions and intentions to the shapes based on their movements. Later experiments varied simple mathematical relationships in the motion of the shapes and showed that the attribution of animacy is driven largely by changes in the speed and direction of the shapes, rather than by their characteristic features. We introduce a new method for creating animacy displays directly from actual human movements. We examined the perception of animacy using transformed displays of human actions, with the aim of gaining new insight into how visual cues lead to the spontaneous use of animate terms and the attribution of social meaning.

Methods
9 scenarios were used in total: 5 scenarios involved interactions between 2 people, such as dancing, chasing, following and circling; 3 scenarios involved a single person walking, jogging or dancing; and 1 scenario was a re-enactment of Heider and Simmel (Nevarez & Scholl, 2000).
4 visual display conditions: Real, Body Silhouette, Pulsing Block, Block.
2 tasks: free response and self-propulsion rating.
32 subjects were used in a between-subjects design: 8 per condition.

Conclusions
Using this new technique for generating stimuli from real video footage, we were able to create abstract displays of geometric shapes that elicited animate terms in subjects' descriptions. Viewpoint appears to influence the rating of self-propulsion, with displays from the top-view being rated higher than those from the side-view.
References
Camurri, A., Mazzarino, B., & Volpe, G. (2004). Analysis of expressive gesture: The EyesWeb Expressive Gesture Processing Library. In A. Camurri & G. Volpe (Eds.), Gesture-based Communication in Human-Computer Interaction, LNAI 2915. Springer Verlag.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57(2).
McAleer, P., Mazzarino, B., Volpe, G., Camurri, A., Smith, K., Paterson, H., & Pollick, F.E. (in press). Perceiving animacy and arousal in transformed displays of human interaction. Proceedings of ISHF_MCM_2004.
Nevarez, H.G., & Scholl, B.J. (2000). Available from http://research.yale.edu/perception/animacy/HS-Blocks-QT.mov

Judgements of Animacy
The scenario depicted (right) resulted in the largest occurrence of animate terms, after the Heider and Simmel display.

Stimulus Production
The experiment involved creating four displays of each scene with decreasing amounts of visual information available. The original footage was captured using a digital video camera.
1. Real Video – the original footage.
2. Body Silhouette – obtained by removing colour information and applying a background-subtraction technique to the input video.
3. Pulsing Block(s) – the movement of each person is represented by a rectangle whose size is related to the Quantity of Motion (QoM), as measured by algorithms included in the EyesWeb Expressive Gesture Processing Library. QoM is computed as the change in the area of the person's silhouette from one frame to the next, summed over the last few frames (4 frames in this experiment). QoM can be taken as a measure of the global amount of detected motion, and can be thought of as a first rough approximation of physical momentum.
4. Block(s) – the centre of mass of each person's silhouette image is tracked; the dimensions of each person's block in this condition were constant.
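The QoM and centre-of-mass computations described above can be sketched as follows. This is a minimal illustration assuming binary silhouette frames stored as NumPy arrays; the function names are ours, not the EyesWeb API:

```python
import numpy as np

def silhouette_area(frame):
    """Number of foreground (person) pixels in a binary silhouette frame."""
    return int(np.count_nonzero(frame))

def quantity_of_motion(frames, window=4):
    """Sum of frame-to-frame changes in silhouette area over the last
    `window` frames (4 in the experiment) -- a rough approximation of
    the global amount of detected motion."""
    areas = [silhouette_area(f) for f in frames]
    diffs = [abs(a2 - a1) for a1, a2 in zip(areas, areas[1:])]
    return sum(diffs[-window:])

def centre_of_mass(frame):
    """Centroid (row, col) of the silhouette, used to place the block
    in the Block condition."""
    rows, cols = np.nonzero(frame)
    return float(rows.mean()), float(cols.mean())
```

The Pulsing Block condition would then draw a rectangle at the centroid whose size grows with QoM, while the Block condition keeps the rectangle's dimensions fixed.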
Next, using the EyesWeb open platform for multimedia production and motion analysis (www.eyesweb.org), the four experimental conditions were created.
[Figure: first, middle and last frames of an example scene in the (1) Real and (4) Block conditions.]

Self-propulsion Rating Results / Free Response Results
Results for the Real Video condition and the Body Silhouette condition are not shown: all subjects rated them as being animate and as completely self-propelled.

Top-view versus Side-view (McAleer, Mazzarino, Volpe, Camurri, Smith, Paterson, & Pollick, 2004)
In contrast to the previous experiment, animacy displays have often been shown from an overhead perspective. We investigated whether the perception of animacy is affected by the viewpoint of the display.

Methods & Results
Subjects were asked to give a rating of self-propulsion for a series of 16 displays. Each display was shown from the top-view and the side-view; only the Block display condition was used. Each subject saw each display 3 times.
[Figure: example (1) Real and (4) Block frames from the side and top views.]
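As a minimal sketch of how the viewpoint ratings could be aggregated: the trial structure (16 displays, 2 viewpoints, 3 repetitions) is from the text, but the data layout and function name are assumptions, and any example ratings are placeholders, not experimental results:

```python
from collections import defaultdict

def mean_rating_by_viewpoint(trials):
    """trials: iterable of (display_id, viewpoint, rating) tuples, one per
    presentation (each of the 16 displays appears 3 times per viewpoint).
    Returns the mean self-propulsion rating for each viewpoint
    ('top' or 'side')."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for _display, viewpoint, rating in trials:
        sums[viewpoint] += rating
        counts[viewpoint] += 1
    return {vp: sums[vp] / counts[vp] for vp in sums}
```

Comparing the two means per subject in this way would show the top-view advantage reported in the Conclusions.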