Manipulation in Human Environments

Manipulation in Human Environments Aaron Edsinger & Charlie Kemp Humanoid Robotics Group MIT CSAIL

Domo

Manipulation in Human Environments Work with everyday objects Collaborate with people

Applications Aging in place Cooperative manufacturing Household chores

Three Themes Use Your Body Social Manipulation Task relevant features

Use Your Body Simplify perception (tool tip, hand) Test assumptions (flat surface) Compliance simplifies contact (placing, grasping, and transferring)

Structure In Human Environments Sense from above Flat surfaces Objects for human hands Objects for use by humans Look the user in the eye Interpretable body Tall and narrow

Social Complementary action Person can simplify perception and action for the robot Robot can cue the human intuitively (body language) Lots of examples of tasks where a robot can be helpful without doing everything (robot doesn't have to solve everything to be helpful)

Task Relevant Features What is important? What is irrelevant? *Distinct from object detection/recognition.

Other Examples Donald Norman Circular openings Tips Handles Contact Surfaces

Why are tool tips common? Single, localized interface to the world Physical isolation helps avoid irrelevant contact Helps perception Helps control

Distinct Perceptual Problem Not object recognition How should it be used? Distinct methods and features

Generalize What You've Learned Across objects Perceptually map tasks across objects key features -> key features Across manipulators Motor equivalence Manipulator details may be irrelevant

Tool Tip Detection Visual + motor detection method Kinematic Estimate Visual Model
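The visual-plus-motor idea on this slide can be sketched as a small fusion step: use the arm's forward kinematics to predict where the tip should appear in the image, then gate and combine that prediction with detections from the visual model. A minimal sketch; the function name, gate radius, and the simple averaging rule are illustrative assumptions, not the deck's actual method.

```python
import numpy as np

def fuse_tip_estimate(kinematic_px, visual_candidates, max_px=40.0):
    """Pick the visual tip detection closest to the kinematic prediction.

    kinematic_px: (x, y) tip location predicted from forward kinematics,
        projected into the image (assumed given).
    visual_candidates: list of (x, y) detections from the visual model.
    max_px: gate radius (hypothetical); reject visual evidence that falls
        too far from the kinematic prediction.
    Falls back to the kinematic estimate when no visual evidence survives.
    """
    kin = np.asarray(kinematic_px, dtype=float)
    if not visual_candidates:
        return tuple(kin)
    cands = np.asarray(visual_candidates, dtype=float)
    dists = np.linalg.norm(cands - kin, axis=1)
    best = int(np.argmin(dists))
    if dists[best] > max_px:
        return tuple(kin)                    # visual evidence unreliable here
    return tuple(0.5 * (kin + cands[best]))  # simple average of the two cues
```

Gating on the kinematic prediction is what makes the weak visual detector usable: most false positives land far from where the arm says the tip should be.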

Acquire a Visual Model

Use The Hand's Frame Combine weak evidence Rigidly grasped
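Because the tool is rigidly grasped, the tip is a single fixed point in the hand's coordinate frame, so weak per-frame evidence can simply be accumulated there. A sketch of that accumulation, assuming 3-D detections in the camera frame and known camera-from-hand transforms; the shapes and names are illustrative, not the original implementation.

```python
import numpy as np

def accumulate_tip_in_hand_frame(detections_cam, hand_poses):
    """Average per-frame tip detections after moving them into the hand frame.

    detections_cam: (N, 3) noisy tip points in the camera frame (assumed).
    hand_poses: list of N 4x4 camera-from-hand transforms (assumed).
    Returns the estimated tip position in the hand frame, where the rigid
    grasp makes it a constant point that averaging can recover.
    """
    pts_hand = []
    for p_cam, T_cam_hand in zip(detections_cam, hand_poses):
        T_hand_cam = np.linalg.inv(T_cam_hand)    # hand-from-camera
        p = T_hand_cam @ np.append(p_cam, 1.0)    # homogeneous transform
        pts_hand.append(p[:3])
    return np.mean(pts_hand, axis=0)
```

Averaging in the hand frame is the step that lets many individually weak detections combine into one confident estimate.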

RSS 2006 Workshop Manipulation for Human Environments Much to be done!

Summary Importance of Task Relevant Features Example of the tool tip Large set of hand tools Robust detection (visual + motor) Kinematic estimate Visual model

In Progress Perform a variety of tasks Insertion Pouring Brushing

Mean Pixel Error for Automatic and Hand Labeled Tip Detection

Mean Pixel Error for Hand Labeled, Multi-Scale Detector, and Point Detector
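The metric behind these two comparison slides is presumably mean pixel error: the average Euclidean distance between detected tip locations and hand-labeled ground truth, one pair per frame. A minimal sketch of that computation (the pairing convention is an assumption).

```python
import numpy as np

def mean_pixel_error(detected, labeled):
    """Mean Euclidean distance, in pixels, between detected tip locations
    and hand-labeled ground truth; inputs are matched lists of (x, y)."""
    d = np.asarray(detected, dtype=float)
    g = np.asarray(labeled, dtype=float)
    return float(np.mean(np.linalg.norm(d - g, axis=1)))
```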

Learning from Demonstration

The Detector Responds To Fast Motion Convexity

Video from Eye Camera -> Motion Weighted Edge Map -> Multi-scale Histogram (Medial-Axis, Hough Transform for Circles) -> Local Maxima
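The first stages of this pipeline can be sketched with plain NumPy: difference consecutive frames to get a motion map, compute a simple gradient edge map, and multiply the two so that edges on the fast-moving tool dominate; the strongest responses then feed the multi-scale histogram stage. This is a toy stand-in for the deck's eye-camera pipeline, under the assumption that grayscale float frames are available.

```python
import numpy as np

def motion_weighted_edge_map(prev_gray, curr_gray):
    """Weight an edge map by frame-to-frame motion so that edges on the
    fast-moving tool dominate. Inputs: 2-D float grayscale frames."""
    motion = np.abs(curr_gray - prev_gray)   # frame-difference motion map
    gy, gx = np.gradient(curr_gray)          # simple gradient edge strength
    edges = np.hypot(gx, gy)
    return motion * edges                    # motion-weighted edges

def strongest_response(weighted):
    """Local-maxima selection collapsed to the single strongest pixel
    (row, col); the real pipeline keeps maxima across scales."""
    return np.unravel_index(int(np.argmax(weighted)), weighted.shape)
```

The motion weighting is what makes the weak edge cue selective: static background edges score zero, so only the moving tool tip survives.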

Defining Characteristics Geometric Isolated Distal Localized Convex Cultural/Design Far from natural grasp location Long distance relative to hand size

Other Task Relevant Features?

Detecting the Tip

Include Scale and Convexity