Slide 1: Model of the Human
Name: Stan
Emotion: Happy
Command: Watch me
Face location: (x, y, z) = (122, 34, 205)
Hand locations: (x, y, z) = (85, -10, 175); (x, y, z) = (175, 56, 186)
Slide 2: Model of the Human
Name: Stan
Emotion: Sad
Command: Watch me
Face location: (x, y, z) = (122, 34, 205)
Hand locations: (x, y, z) = (85, -10, 175); (x, y, z) = (175, 56, 186)
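The two slides above show the same human model with different field values. A minimal sketch of such a model as a data structure, assuming Python and field names taken from the slides (the class itself is illustrative, not from the original system):

```python
from dataclasses import dataclass
from typing import Tuple

Point3D = Tuple[float, float, float]  # (x, y, z) in the robot's coordinate frame

@dataclass
class HumanModel:
    """Illustrative container for the human-model fields shown on the slides."""
    name: str
    emotion: str                         # e.g. "Happy" or "Sad"
    command: str                         # e.g. "Watch me"
    face_location: Point3D
    hand_locations: Tuple[Point3D, Point3D]

# The example human from slide 1:
stan = HumanModel(
    name="Stan",
    emotion="Happy",
    command="Watch me",
    face_location=(122, 34, 205),
    hand_locations=((85, -10, 175), (175, 56, 186)),
)
```

Moving from slide 1 to slide 2 then amounts to updating a single field, e.g. `stan.emotion = "Sad"`, while the locations continue to be refreshed from the sensors.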
Slide 3: Human Agent Internal Model
Model of the current human: a description of the current human
Human activity: a description of what the user is doing
User's request: the nature of the interaction; the task the user requests of the robot
Slide 4: Human Agent
Motivated by the desire for natural human-robot interaction. Encapsulates what the robot knows about the human:
Identity
Location
Intentions
Slide 5: Human Agent Modules
Detection module
Monitoring module
Identification module
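The three modules named on the slide could be composed into a single agent roughly as follows; this is a structural sketch only, with hypothetical method names and behavior (the slides do not give an API):

```python
class DetectionModule:
    """Reports human presence from per-modality sensor readings."""
    def detect(self, readings: dict) -> bool:
        # Presence if any sensor modality fired (simplified fusion rule).
        return any(readings.values())

class MonitoringModule:
    """Keeps track of the detected human's location."""
    def track(self, observation: dict):
        return observation.get("face_location")

class IdentificationModule:
    """Matches the current observation against stored human models."""
    def identify(self, current: dict, stored: list) -> list:
        return [m for m in stored if m.get("name") == current.get("name")]

class HumanAgent:
    """The Human Agent as the composition of its three modules."""
    def __init__(self):
        self.detection = DetectionModule()
        self.monitoring = MonitoringModule()
        self.identification = IdentificationModule()
```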
Slide 6: Detection Module
Allows the robot to detect human presence. Uses multiple sensor modalities:
IR motion sensor array
Speech recognition
Skin-color segmentation
Face detection
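One simple way to combine the four modalities above is to report presence when any of them fires; the slides do not specify the fusion rule, so the OR-style logic below is an assumption:

```python
def human_detected(readings: dict) -> bool:
    """Fuse the four detection modalities from the slide.

    `readings` maps each modality name to a boolean "did this modality
    report a human?" flag; missing modalities default to False.
    """
    modalities = ("ir_motion", "speech", "skin_color", "face")
    return any(readings.get(m, False) for m in modalities)
```

A real system would likely weight modalities by reliability or require agreement between two of them, but the interface would look the same.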
Slide 7: Monitoring Module
Keeps track of the detected human using localization and tracking algorithms:
Face tracking
Finger-pointing gesture recognition
Basic speech interface
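The slides name face tracking but not the tracking algorithm. As a stand-in, the sketch below smooths successive face-location measurements with an exponential filter; the blending factor and the filter choice are assumptions, not the original method:

```python
from typing import Optional, Tuple

Point3D = Tuple[float, float, float]

def smooth_track(prev: Optional[Point3D], meas: Point3D,
                 alpha: float = 0.6) -> Point3D:
    """One tracking update: blend the previous estimate with the new
    face-location measurement. With no previous estimate, adopt the
    measurement directly."""
    if prev is None:
        return meas
    return tuple(alpha * m + (1 - alpha) * p for p, m in zip(prev, meas))
```

Calling this once per frame keeps the (x, y, z) face location in the human model from jumping on noisy measurements.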
Slide 8: Identification Module
Under development. Attempts to identify the detected human by comparing the stored model with the current model:
Voice pattern comparison
Name
Height
Clothing color
Also detects changes in the dynamic model:
Clothing color
Height
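The comparison of stored and current models could be sketched as a weighted attribute match, where dynamic attributes (clothing color, height) that the slide says can change carry less weight than a stable one like the name. The weights and threshold below are illustrative assumptions, not values from the slides:

```python
def match_score(stored: dict, current: dict) -> float:
    """Score how well a current observation matches a stored human model.

    Weights are hypothetical: the name is weighted most heavily because
    clothing color and height belong to the dynamic model and may change.
    """
    weights = {"name": 0.5, "clothing_color": 0.3, "height": 0.2}
    return sum(w for attr, w in weights.items()
               if stored.get(attr) == current.get(attr))

def identify(current: dict, stored_models: list, threshold: float = 0.5):
    """Return the best-matching stored model, or None below the threshold."""
    best = max(stored_models, key=lambda m: match_score(m, current))
    return best if match_score(best, current) >= threshold else None
```

A mismatch on a dynamic attribute alone (say, a new shirt color) then lowers the score without breaking identification, which is the behavior the slide's "detects changes in dynamic model" bullet suggests.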