
1 Affective Computing Quantifying and Augmenting People’s Experiences Hyungil Ahn, Shani B Daily, Rana el Kaliouby, Micah Eckhardt MIT Media Laboratory

2 Outline of Workshop
Roundtable introductions
The challenge of measuring customer experiences
Affective sensors and technologies + demo
Consumer decision-making
Affect as index

3 Measuring Beverage Taste Preference (with Pepsi)
Evaluation method used today: cleanse palate (crackers, water); consume at least half of the beverage (food); fill out a questionnaire about liking/buying
Other method: focus groups
Problem: these do not predict marketplace buying
Challenge: construct an evaluation method and a computational model that accurately predict consumer preferences and marketplace buying behavior

4 Self-Report
Most attempts to date to predict marketplace decisions are based on studies where people are asked what they would do. Self-report captures cognitive elements of liking or wanting (what you think you should say you like or want) but may or may not capture the actual feelings of liking or wanting. Self-reported liking can be rated instantly (after any sip), but obtaining an accurate value for wanting or motivation may require a longer experience.

5 Why Technology to Tag Experiences?
1. Measuring in situ experiences: the MIT Media Lab has over 100 industry sponsors, a unique opportunity to develop and test with real-world applications: advertising, customer delight, customer preferences, learning, medical training (empathy), cognitive load measurement

6 Why Technology to Tag Experiences?
1. Measuring experiences: advertising, marketing, usability, learning, training, customer relationship management
2. Augmenting communication: advance social-emotional intelligence in machines to improve people's experience with technology; enhance people's ability to connect with others (autism spectrum disorders)

7 Physiology Sensing

8 Skin Conductance (SC) Sensors
Traditional: SC sensed off the fingers, wired to a box
Media Lab Galvactivator (1999): an LED on the hand reflects SC
Current version: wirelessly communicating SC sensed off the wrist
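
As an illustration of how a wrist-worn SC stream might be cleaned up before analysis, here is a minimal Python sketch. The sample values and smoothing constant are invented for illustration; this is not the Media Lab sensor's actual firmware or processing.

```python
# Hedged sketch: exponentially smooth a raw skin-conductance stream
# (values in microsiemens). Data and alpha are illustrative assumptions.
def smooth_sc(samples, alpha=0.1):
    """Return an exponentially smoothed copy of raw SC samples."""
    smoothed = []
    level = None
    for s in samples:
        # The first sample initializes the filter; later samples blend in.
        level = s if level is None else alpha * s + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

raw = [2.0, 2.1, 2.0, 4.5, 4.6, 4.4, 2.2]  # brief arousal spike mid-stream
print(smooth_sc(raw))
```

Smoothing like this damps motion artifacts while preserving the slower arousal trend that SC analyses typically care about.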

9 Mindreader Platform

10 Automated Facial Analysis: Affective + Cognitive States
Concentrating, Disagreeing, Interested, Thinking, Unsure; Absorbed, Concentrating, Vigilant; Disapproving, Discouraging, Disinclined; Asking, Curious, Impressed, Interested; Brooding, Choosing, Thinking, Thoughtful; Baffled, Confused, Undecided, Unsure; Agreeing, Assertive, Committed, Persuaded, Sure; Happy, Delighted, Enjoying, Content

11 A Smile for Every Emotion!!

12 Mindreader Platform
Feature point tracking (NevenVision) > head pose estimation > facial feature extraction > head & facial action unit recognition > head & facial display recognition > mental state inference ("Hmm… let me think about this…")
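
The slide's stage-by-stage flow can be sketched as a simple chain of transforms. Every function name and field below is a hypothetical placeholder for illustration, not the real MindReader API:

```python
# Hypothetical sketch of the pipeline: each stage consumes the previous
# stage's output. Stage names mirror the slide; implementations are stubs.
def run_pipeline(frame, stages):
    data = frame
    for stage in stages:
        data = stage(data)
    return data

stages = [
    lambda f: {"points": f},                 # feature point tracking
    lambda d: {**d, "pose": "frontal"},      # head pose estimation
    lambda d: {**d, "features": ["mouth"]},  # facial feature extraction
    lambda d: {**d, "aus": ["AU12"]},        # head & facial action unit recognition
    lambda d: {**d, "display": "smile"},     # head & facial display recognition
    lambda d: {**d, "state": "interested"},  # mental state inference
]
print(run_pipeline("frame0", stages)["state"])
```

The point of the chain structure is that each level of output (action units, displays, states) remains accessible to clients, mirroring the platform's multi-level interface.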

13 Demo

14 Generalization Level 1: train and test on Mind Reading DVD

15 Generalization Level 2: train on Mind Reading DVD; test on posed corpus
A challenge for machine learning, and for interventions for autism
96 videos from CVPR 2004 tested on our system and on 18 people
Accuracy: 80% for the best 11% of videos
States: agreeing, disagreeing, confused, concentrating, thinking, interested

16 Generalization Level 2: train on Mind Reading DVD; test on posed corpus
96 videos from CVPR 2004 tested on our system and on a panel of 18 people
Accuracy of panel of 18 people: average = 54.5%
Accuracy of computer: average = 63.5% (better than 17 of the 18 panelists)
Accuracy: 80% for the best 11% of videos

17 Mindreader Platform Components
MindReader API, wrappers, SDK (for developers), application (for non-developers), sample apps, tracker, OpenCV, nPlot, external libs/APIs, downloadables

18 Features of the platform
Facial information accessible at multiple levels (action units, expressions and head gestures, affective and/or cognitive states)
Interface for training new and existing mental states
Sensor support (skin conductance, motion, temperature)
Socket communication
Wrappers in Python and Java
Linux (Ubuntu) version coming soon
Porting to camera phones and the XO
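
Since the platform exposes socket communication with Python wrappers, a client might consume state updates roughly as sketched below. The host, port, and newline-delimited JSON message format are assumptions made for illustration, not the platform's documented protocol:

```python
# Assumed message shape: one JSON object per line, e.g.
#   {"state": "thinking", "p": 0.8}
# Neither this format nor the port below comes from the real platform docs.
import json
import socket

def parse_state(line):
    """Parse one message line into a (state, probability) pair."""
    msg = json.loads(line)
    return msg["state"], msg["p"]

def stream_states(host="localhost", port=9000):
    """Connect to the (hypothetical) state socket and yield updates."""
    with socket.create_connection((host, port)) as sock:
        for line in sock.makefile("r"):
            yield parse_state(line)

print(parse_state('{"state": "thinking", "p": 0.8}'))
```

A line-delimited text protocol like this is one plausible reason wrappers in multiple languages (Python, Java) are cheap to provide.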

19 To use the platform
Accessible to sponsors for download (the stand-alone application, or the SDK for developers)
Contact: Rana el Kaliouby

20 Consumer Decision-Making

21 Consumer Decision-Making Computational Model
Behavioral measure (number of choices, amount consumed, etc.)
Cognitive measure (self-reports, focus groups, etc.)
Affective measure (facial valence, skin conductance, etc.)

22 1. Experienced utility: affective or hedonic experience, which can be captured by a moment-based or a memory-based measure.
2. Decision utility: inferred from observed behavioral choices.
3. Predicted utility: a belief about future experienced utility.
4. Moment utility: a measure of current affective or hedonic experience.
5. Total utility: derived by statistically aggregating a series of moment utilities.
6. Remembered utility: a single memory-based measure of affective or hedonic experience, based on retrospective assessments of episodes or periods of life.

23 Wanting vs. Liking
Human decision-making is not driven by a single valuation system; rather, multiple systems, such as cognitive and affective processing, systematically influence it. Moreover, the neural substrates of liking (pleasure) are separate from those of wanting (motivation) in the human brain, so neuroscience provides evidence for treating these concepts differently when modeling how people make decisions. Our model will separate them.

24 Experienced Utility
(i) The memory-based approach accepts a person's retrospective evaluations of past episodes and situations as valid data. The remembered utility of an episode of experience is defined by a retrospective global assessment of it: the self-reported recollection of liking, e.g., "How much did you like the drink?"
(ii) The moment-based approach derives the experienced utility of an episode, e.g., a series of sips, from real-time measures of the pleasure and pain that the participant experienced during that episode. Moment utility refers to the valence (good or bad) and the intensity (mild to extreme) of the current affective or hedonic experience, e.g., the experience of the current sip. Kahneman's total utility of an episode is derived exclusively and statistically from the record of moment utilities during that episode. We measure moment utility by measuring emotional expressions elicited by the episode in real time.
People also sometimes attempt to forecast the affective or hedonic experience (the experienced utility) associated with various life circumstances; this is called predicted utility or affective forecasting.
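
As a toy numerical contrast between the two approaches: total utility aggregates moment utilities directly, while a retrospective summary can diverge from it. The peak-end rule used below is one plausible retrospective heuristic from the decision-making literature, offered here as an assumption, not as this study's model; the per-sip scores are invented.

```python
# Per-sip valence scores in [-1, 1]; the values are illustrative only.
def total_utility(moments):
    """Kahneman-style total utility: a statistical aggregate (here, a sum)."""
    return sum(moments)

def remembered_utility(moments):
    """Peak-end heuristic: average of the most intense moment and the last one."""
    return (max(moments, key=abs) + moments[-1]) / 2

sips = [0.2, 0.5, -0.1, 0.8, 0.3]
print(total_utility(sips), remembered_utility(sips))
```

Note that the two summaries need not agree, which is exactly why the moment-based and memory-based measures are kept distinct.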

25 Experiment Setup

26 Each trial: machine selection > sip the resulting beverage > answer questions

27

28 Affective Measure (Facial Valence, Skin Conductance, etc.)
Anticipation > disappointment/satisfaction > liking/disliking
25 consumers, 30 trials, 30-minute videos!

29 Behavioral Measure (Number of Choices, Amount Consumed, etc.): choosing a vending machine on the computer screen

30 Behavioral Measure (Number of Choices, Amount Consumed, etc.): beverage outcome and sipping

31 Cognitive Measure (Self-Reports, Focus Groups, etc.): asking self-reported beverage liking (asked every trial)

32 Cognitive Measure (Self-Reports, Focus Groups, etc.): asking machine liking (asked every 5 trials)

33 Asking Expectation Comparison and Purchase Intent (asked every 5 trials)

34 Participants
                     Men  Women
Total participants    17     22
Video analysis        15     19
Facially expressive    7     10

35 Analysis overview
Cognitive (self-report on questionnaire):
– Cognitive wanting
– Cognitive liking
Behavioral:
– Consumption
– Machine selection
Affective:
– Valence (positive or negative) from the facial expression:
  Affective outcome valence (satisfaction/disappointment, associated with affective wanting value)
  Affective evaluation valence (liking/disliking, associated with affective liking value)
– Arousal (calm or excited) from the skin conductance data:
  Affective anticipatory arousal (associated with risk and uncertainty, possibly valenced feelings of hope or dread)
  Affective outcome arousal (associated with affective wanting value)
  Affective evaluation arousal (associated with affective liking value)
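
The segment-wise structure above (anticipation, outcome, evaluation) lends itself to a per-phase summary. The sketch below simply averages normalized samples per phase; the phase names follow the analysis, but the field layout and numbers are invented for illustration:

```python
# Average each trial phase's samples; the numeric values are illustrative.
def summarize(trial):
    """Map each trial phase to the mean of its samples."""
    return {phase: sum(vals) / len(vals) for phase, vals in trial.items()}

trial_sc = {
    "anticipation": [0.4, 0.6],  # arousal while the machine "decides"
    "outcome":      [0.9, 0.7],  # arousal on seeing which beverage came out
    "evaluation":   [0.3, 0.5],  # arousal while sipping
}
print(summarize(trial_sc))
```

The same per-phase averaging applies equally to facial valence scores, giving one number per phase per trial for the group comparisons that follow.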

36 Cognitive Analysis: Vanilla vs. SummerMix
Three different ways of looking at the questionnaire data:
– "Ultimate" preference after the 30 trials: 20:18
– 20:17
– 18:16
People were fairly evenly split in preferring Vanilla or SummerMix, with only a slight preference for Vanilla.

37 Behavioral Analysis
Consumption:
– an average of 15 sips of each beverage
– slightly more Vanilla (6.7 oz) than SummerMix (6.1 oz)
– Vanilla favorers reported lower purchase intent than those who preferred SummerMix
– this suggests that the SummerMix favorers, while a slightly smaller group, were even more likely to buy the soda
Machine selection:
– a subtle bias (54.6%) toward the SummerMix machine

38 Cognitive + Behavioral Analysis
By these measures, SummerMix should have had nearly as good a chance to succeed in the marketplace as Vanilla. Going to market looks like a reasonable decision. But this turns out to be only part of the story.

39 Physiology Sensing

40 Skin Conductance (SC) Sensors
Traditional: SC sensed off the fingers, wired to a box
Media Lab Galvactivator (1999): an LED on the hand reflects SC
Current version: wirelessly communicating SC sensed off the wrist

41 Super Bowl XLII
Audience skin-conductance trace with annotated events: Patriots touchdown, Doritos mouse ad, Patriots 1st down, end zone pass overthrown, lead-up to Patriots touchdown

42 Sponsor Week event: Randi vs. Raphael, with live audience feedback. The magician correctly identifies the search term.

43

44 Facial Analysis

45 Example clips rated on satisfaction/disappointment and liking/disliking

46 (Obtained outcome ≠ what she wanted) Rated on satisfaction/disappointment and liking/disliking

47

48 (Obtained outcome = what she wanted) Rated on satisfaction/disappointment and liking/disliking

49

50 (Sipped soda = what he disliked) Rated on satisfaction/disappointment and liking/disliking

51

52 (Sipped soda = what he liked) Rated on satisfaction/disappointment and liking/disliking

53 Facial Valence Analysis

54 Asymmetry
Vanilla favorers showed absolutely no positive expressions while tasting SummerMix, while nearly half of the SummerMix favorers showed something positive while tasting Vanilla. The complete lack of any positive expressions in the Vanilla group may be a red flag. If positive FVs mapped into purchasing behavior, one might expect slightly less than half the SummerMix favorers to buy both products, while no Vanilla favorers would buy SummerMix.

55

56 Automated Facial Analysis Tracker Accuracy Average = 77.4%; highest = 96.87%; lowest = 23.98%

57 Automated Facial Analysis Sip Detection

58 700 Sips, 82% of the sips detected

59 Automated Facial Analysis Real-time Analysis

60 Automated Facial Analysis: Speedup over Manual Coding
Manual coding takes 3 minutes on average for each minute of video. At least two or three coders are needed to establish the validity of the coding, resulting in 6 to 9 minutes of coding per minute of video: a very labor-intensive and time-consuming approach. Each participant has about 30 minutes of video, so we can expect two coders to take over 180 minutes (3 hours) to code one participant's data.

61 Automated Facial Analysis: Speedup over Manual Coding
With our automated sip detection algorithm, a coder can fast-forward to sip events and outcome segments, the only two places where we coded FVs in the analysis above. Positioned at the right spots in the video, the coder only has to look at about 20 seconds of video per trial. Over 30 trials per participant, that is 600 seconds, or 10 minutes of video to code instead of 30. At 3 minutes of coding time per minute of video, plus occasional breaks, this comes to roughly 30 to 40 minutes per coder per participant, or 60 to 80 minutes with two coders. Combining our sip detection algorithm with human coding would thus cut the total manual coding time (person-hours) for sip events by at least a factor of three.
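
The slide's arithmetic, restated as a small calculation using only the numbers given in the slides:

```python
# Numbers taken directly from the slides; nothing here is new data.
coding_min_per_video_min = 3   # manual coding: 3 min per minute of video
coders = 2

full_video_min = 30            # minutes of video per participant
sip_only_video_min = 10        # ~20 s/trial * 30 trials = 600 s = 10 min

full_cost = full_video_min * coding_min_per_video_min * coders
sip_cost = sip_only_video_min * coding_min_per_video_min * coders

print(full_cost, sip_cost, full_cost / sip_cost)  # 180 60 3.0
```

The factor-of-three claim falls straight out of the ratio of video minutes that still need human eyes.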

62 Conclusion – Next steps

63 Affect as Index

64 Multi-Person Aggregation
X people are watching the same video or doing the same task (+ physiology): can we aggregate this data in real time?
Advertising and TV/cable programming
Web applications
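
One simple real-time aggregation rule for the multi-viewer question is a per-timestep mean across viewers. This sketch is purely illustrative, with invented viewer data, and makes no claim about how the actual system would aggregate:

```python
# Each inner list is one viewer's normalized SC samples over time.
def aggregate(streams):
    """Average the latest sample across all viewers at every timestep."""
    return [sum(samples) / len(samples) for samples in zip(*streams)]

viewers = [
    [0.1, 0.9, 0.8],
    [0.2, 0.8, 0.7],
    [0.0, 1.0, 0.9],
]
print(aggregate(viewers))
```

A mean keeps the aggregate O(number of viewers) per timestep, which is what makes live audience feedback (as in the Super Bowl and Randi vs. Raphael events) feasible.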

65 Super Bowl XLII
Audience skin-conductance trace with annotated events: Patriots touchdown, Doritos mouse ad, Patriots 1st down, end zone pass overthrown, lead-up to Patriots touchdown

66 Sponsor Week event: Randi vs. Raphael, with live audience feedback. The magician correctly identifies the search term.

67 Shani to add more slides here?

68 Research Roadmap for Quantifying Experiences

69 Two-Person Interaction
Measuring: mirroring, synchrony, initiation
For service interactions (e.g., Bank of America)
For autism, e.g., monitoring real-time social interactions and parent-child interactions: Baby Siblings project (UK), Playlamp

70 Mobile Phone Application
Port the API to Google's Android platform (Java)
Develop some applications to work on mobiles
HTC Android phones coming out Dec 08


72 Sociable Robots

73 Acknowledgements
MIT Media Lab Affective Computing Group
Computer Science, American University in Cairo: Abdel Rahman Nasser, Youssef Abdallah, Mina Mikhail, Tarek Hefni
Sponsors: National Science Foundation, Nancy Lurie Marks Family Foundation, MIT Media Lab TTT Consortium, Seagate, Google, Robeez

