Towards Canonical Views of Animacy from Scenes of Human Action

1 Phil McAleer, 2 Barbara Mazzarino, 1 Helena Paterson, 1 Frank E. Pollick
1 Department of Psychology, University of Glasgow, Glasgow, Scotland; 2 Infomus Lab, D.I.S.T., University of Genova, Genova, Italy

Conclusions
This method for the production of animacy displays is viable. Displays filmed from the TopView are more often rated as being self-propelled than those filmed from the SideView. Observers were able to reliably match animacy displays to given scenarios, though this ability appears to be affected by viewpoint.

Introduction
It is well known that social intention and meaning can be attributed to displays of moving geometric shapes, yet the cognitive processes that underlie this perception of animacy are still open to debate. Heider and Simmel (1944) showed that people, on viewing a simple animation involving geometric shapes (a disc, a large triangle and a small triangle), would attribute emotions and intentions to the shapes based on their movements. Subsequent experiments have created new displays by varying simple mathematical relationships in the motion of the shapes. Our previous research introduced a new method for creating animacy displays directly from human motion (McAleer et al., 2004). We continue to use this method to examine the perception of animacy, and here we ask whether the perception of animacy is viewpoint dependent.

Stimulus Production
Actors were filmed performing various movements and interactions on a 10 ft square stage. The original footage was captured using two digital video cameras: one positioned directly above centre stage (TopView), the second positioned in line with the centre of the side of the stage (SideView). The X and Y co-ordinates of each actor were then extracted from the footage using the EyesWeb open platform for multimedia production and motion analysis, which tracks the centre of mass of the silhouette image of each person. These co-ordinates were filtered to reduce noise and used to create QuickTime movies depicting white disk(s) on a black background.
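The co-ordinate extraction in the original work was done with EyesWeb; as a minimal sketch of the same idea, the Python/OpenCV code below tracks the centre of mass of a single actor's silhouette, smooths the trajectory with a moving average, and re-renders it as a white disk on a black background. The file names, the background-subtraction step and all parameter values are illustrative assumptions, not the authors' pipeline or settings.

```python
"""Rough sketch of the animacy-display pipeline described above:
silhouette centre-of-mass tracking -> smoothing -> disk rendering.
An approximation of the EyesWeb-based method, for one actor per clip."""
import cv2
import numpy as np


def extract_trajectory(video_path):
    """Return per-frame (x, y) centre-of-mass estimates of the moving figure."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    bg = cv2.createBackgroundSubtractorMOG2()   # crude stand-in for silhouette extraction
    points, size = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        size = (frame.shape[1], frame.shape[0])
        mask = bg.apply(frame)                   # foreground (silhouette) mask
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] > 0:                         # centre of mass of the silhouette
            points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        elif points:
            points.append(points[-1])            # hold last position if tracking drops out
    cap.release()
    return np.array(points), fps, size


def smooth(track, window=9):
    """Simple moving-average filter to reduce tracking noise."""
    kernel = np.ones(window) / window
    return np.column_stack([np.convolve(track[:, i], kernel, mode="same")
                            for i in range(2)])


def render_disk_movie(track, fps, size, out_path, radius=12):
    """Render the smoothed trajectory as a white disk on a black background."""
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for x, y in track:
        frame = np.zeros((size[1], size[0], 3), dtype=np.uint8)
        cv2.circle(frame, (int(round(x)), int(round(y))), radius, (255, 255, 255), -1)
        writer.write(frame)
    writer.release()


if __name__ == "__main__":
    # Placeholder file names for illustration only.
    track, fps, size = extract_trajectory("topview_chase_actor1.avi")
    render_disk_movie(smooth(track), fps, size, "topview_chase_disk.mp4")
```

For a two-person scenario, the same extraction step would be run once per actor (e.g. on separately segmented silhouettes) and both trajectories rendered into the same frames.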
TopView Versus SideView

Methods
16 scenarios were created: 13 involved interactions between 2 people, such as chasing, following, walking and playing, and 3 showed just one person walking or jogging. 10 subjects ran 3 blocks; each block showed all 16 scenarios from both viewpoints. Subjects rated the self-propulsion of each display on a scale of 1 to 9, where 1 meant the disks in the display were not self-propelled and 9 meant the disks were self-propelled.

[Figure: example frames of the real footage and of the corresponding disk displays from the Top and Side viewpoints.]

Results
There was a significant effect of Viewpoint, F(1,9) = 9.276, p < 0.05. Displays filmed from the TopView (mean = 5.563) were more often rated as being self-propelled than those filmed from the SideView (mean = 4.438).

Basic Human Scenarios
Blythe, Todd and Miller (1999) provide evidence for 6 basic scenarios in human life that are at the root of all animacy displays: chasing, courting, fighting, following, guarding and playing. We tested subjects to see whether they could determine which scenario was which, and whether the scenarios were more salient depending on the viewpoint from which they were filmed.

Methods
2 actors performed 5 examples of each scenario. 1 example of each scenario was selected, and both viewpoints of that example were used. Subjects were shown each display and, using a 6AFC (fight, guard, follow, chase, play and flirt), were asked to choose the verb that best described the action.

Results
The ability to correctly differentiate the scenarios was above chance level for all scenarios at both viewpoints. For Fight, Play and Flirt this ability appears to be independent of viewpoint, whereas Guard, Chase and Follow were better recognised from the TopView.

References
Blythe, P. W., Todd, P. M., & Miller, G. F. (1999). Judging intention from motion: Basic mechanisms for social rationality. In G. Gigerenzer, P. M. Todd, & the ABC Research Group, Simple heuristics that make us smart. New York: Oxford University Press.
Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57(2), 243-259.
McAleer, P., Mazzarino, B., Volpe, G., Camurri, A., Paterson, H., & Pollick, F. (2004). Perceiving animacy and arousal in transformed displays of human interaction. Journal of Vision, 4(8), 230a.
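Analysis note: to make the form of the statistics reported above concrete, here is a minimal Python sketch (assuming NumPy and SciPy >= 1.7). With only two levels of the within-subjects Viewpoint factor, the repeated-measures F(1,9) equals the square of a paired t statistic, and 6AFC recognition scores can be compared against a chance level of 1/6 with a binomial test. All numbers in the example are random placeholders, not the study's data; only the shape of the tests is meant to be informative.

```python
"""Illustrative analysis sketch for the viewpoint and scenario-recognition results."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- Viewpoint effect on self-propulsion ratings ---------------------------
# One mean rating (1-9 scale) per subject and viewpoint. With two levels of a
# within-subjects factor, the repeated-measures F(1, n-1) is the square of a
# paired t statistic.
top_ratings = rng.uniform(4, 7, size=10)    # placeholder per-subject means, TopView
side_ratings = rng.uniform(3, 6, size=10)   # placeholder per-subject means, SideView
res = stats.ttest_rel(top_ratings, side_ratings)
print(f"Viewpoint: F(1,{len(top_ratings) - 1}) = {res.statistic ** 2:.3f}, "
      f"p = {res.pvalue:.3f}")

# --- Scenario recognition vs. chance ---------------------------------------
# Six response alternatives, so chance is 1/6; a one-sided binomial test asks
# whether the observed number of correct choices exceeds chance.
n_trials, n_correct = 60, 25                # placeholder counts for one scenario/viewpoint
binom = stats.binomtest(n_correct, n_trials, p=1 / 6, alternative="greater")
print(f"6AFC accuracy {n_correct}/{n_trials}: p = {binom.pvalue:.4f} vs. chance")
```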