-Praktikum- Cognitive Robots Dr. Claus Lenz Robotics and Embedded Systems Department of Informatics Technische Universität München



 Object following
 Separating / sorting robot
 Tattoo artist / writer robot
 Games
 Tetris

1) Object following
 Show an object to Cambot
 Cambot follows the object
 Gripperbot also follows the object, based on Cambot's data
 If the object is reachable, Gripperbot will "bite" and fetch it
 Gripperbot puts it in a specific place

2) Separating robot
 The hand-mounted camera analyzes the trash and categorizes it by perceived material
 The robot hand sorts the trash based on this categorization
 Perhaps build some sort of shelf for the trash (to work around the robot hand's limitations)
 Alternative:
– 1. The camera searches for and identifies the color and location of each object, as well as the location of the mechanical hand.
– 2. The camera sends the locations to the mechanical hand, which grasps the corresponding objects and stacks them up.
– Since the mechanical hand has no sensors to detect whether it has grasped an object correctly, the camera monitors the process.
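The perceive-categorize-sort loop on this slide could be sketched as follows. The material names, bin assignments, and the `sort_trash` interface are illustrative assumptions; the real camera and hand interfaces are not specified here.

```python
# Sketch of the separating robot's perceive-categorize-sort loop.
# Hypothetical mapping from perceived material to a shelf compartment.
BINS = {"plastic": 0, "metal": 1, "paper": 2}

def categorize(material):
    """Return the shelf compartment for a perceived material."""
    # Unknown materials go to an extra bin at the end of the shelf.
    return BINS.get(material, len(BINS))

def sort_trash(perceived_items):
    """perceived_items: list of (material, position) pairs from the camera.
    Returns the planned pick-and-place actions for the hand."""
    plan = []
    for material, position in perceived_items:
        plan.append({"pick": position, "place_bin": categorize(material)})
    return plan
```

For example, `sort_trash([("metal", (0.1, 0.2))])` plans a single pick at `(0.1, 0.2)` with placement into bin 1. The camera would then monitor each grasp, re-planning if the pick failed.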

6) Sorter
 The workspace contains blocks with numbers, colors, and letters, plus a kind of shelf
 The user indicates via microphone or gesture recognition how the blocks should be sorted, e.g. by color
 The hand sorts the blocks accordingly

3) Tattoo artist / writer robot
 The Kinect camera creates a 3D model of the object (head, hand, leg, etc.)
 The robot hand draws a tattoo on the surface with a marker pen
 Alternatives:
– While a person writes, the eye either follows the writer's movement or recognizes the symbols; the hand then writes those symbols.
– The eye recognizes sign-language symbols and the hand performs the corresponding tasks (open gripper, close gripper, carry object, sort objects, etc.) or writes down the words.

4) Games
 Chess
 Order dice
– Sort by the number on the top face. The dice lie at random places; the grabber picks them up and places them in a line, ordered by the number on their top face.
– The task could be made harder with verbal commands (e.g. ascending, descending, only dice with even or uneven numbers, etc.).
 Tic Tac Toe:
– 3x3 field; Cambot follows the game, Gripperbot has a pen
 Connect Four ("4 wins")
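The dice-ordering task above, including the verbal-command variants, could be sketched like this; the command names and the `(top_face, position)` representation are assumptions for illustration.

```python
def order_dice(dice, command="ascending"):
    """dice: list of (top_face, position) pairs from the camera.
    Returns the positions in pick order according to the verbal command
    (ascending, descending, even, uneven)."""
    # Verbal filter commands keep only a subset of the dice.
    if command == "even":
        dice = [d for d in dice if d[0] % 2 == 0]
    elif command == "uneven":
        dice = [d for d in dice if d[0] % 2 == 1]
    reverse = (command == "descending")
    # Sort by top-face value and return the pick order for the grabber.
    return [pos for face, pos in sorted(dice, key=lambda d: d[0], reverse=reverse)]
```

For example, `order_dice([(3, "a"), (1, "b"), (6, "c")])` returns `["b", "a", "c"]`, i.e. the grabber picks the die showing 1 first.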

5) Tetris
 A human places Tetris pieces one by one on an area next to the gripper arm.
 The robot grips each piece and tries to put it on a plank in front of it.
 The camera identifies the type of piece and finds a place for it.
 Alternatively, while the gripper holds a piece, the user could give rotation instructions to the camera via gestures, which would be relayed to the gripper.
 (Note: the gripper cannot rotate, so I suggest the user rotates the pieces; the system then finds the optimal placement.)

Chosen Application
 Sorting

Gripper Bot Tasks / Modules I
 Task 1 – Communication
– Positions of the objects in world coordinates (coordinate transformation)
– Type of objects
– "World model": a collection of relevant information, updated continuously
 Task 2 – Sorting
– Sorting strategy
– Grasp objects (based on the strategy)
– Sort them
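The coordinate transformation in Task 1 can be sketched as a homogeneous transform; the transform `T_world_cam` shown here is a placeholder for whatever the calibration step produces.

```python
import numpy as np

def to_world(p_cam, T_world_cam):
    """Transform a 3D point from camera coordinates to world coordinates
    using a 4x4 homogeneous transform."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_world_cam @ p)[:3]

# Example transform: camera frame translated by (1, 0, 0) in the world
# frame (purely illustrative; the real values come from calibration).
T = np.eye(4)
T[0, 3] = 1.0
```

With this `T`, a point at `(2, 0, 0)` in camera coordinates maps to `(3, 0, 0)` in world coordinates; rotations would appear in the upper-left 3x3 block of the matrix.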

Gripper Bot Tasks / Modules II
 Required
– Know where to put the objects
– Decide the order of sorting / strategy
 Order
– Moving
– Gripping (also change/adapt the gripper design)
– Calibration / grasping strategy
– Error handling / recovery
 Team:
– Wang Ke
– Fungja Hui

Common Things
 The same coordinate system for both robots
 Moving the robots to specific positions
 Collisions between robots: start with "good" timing; later, parallel robot action might be possible with active collision avoidance
 Timing: coordinating module
 Interface to the whole system: command line / GUI / speech recognition / visualization (?)
 Task: calibration of the robots in the world
 Define the objects:
– Start with simple objects (easy to grasp and recognize)
– Different colors: RED, GREEN, BLUE
– Different shapes: CUBE, CYLINDER, SPHERE
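The shared object definitions and the "world model" both robots read could be sketched as a small data structure; the field names and the dictionary-based model are assumptions, only the color and shape sets come from the slide.

```python
from dataclasses import dataclass

# Agreed object vocabulary from the slide.
COLORS = ("RED", "GREEN", "BLUE")
SHAPES = ("CUBE", "CYLINDER", "SPHERE")

@dataclass
class WorldObject:
    color: str
    shape: str
    position: tuple  # (x, y, z) in the common world coordinate system

def update_world_model(world, obj_id, obj):
    """Insert or replace an object observation. Both Cambot and
    Gripperbot read from this shared model."""
    world[obj_id] = obj
    return world
```

Keeping the model as a plain mapping from object IDs to observations makes the "update world model" step in Gripper Bot Task 1 a simple overwrite of the latest perception result.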

CamBot Tasks I
 Task 1 – Recognition of objects
– "Survey the workspace": moving, searching for dead spots
– Type of objects (classification)
– Position of objects (pose estimation)
 Task 2 – Check whether grasping worked
– Move to the gripper and "look" whether the object is in the hand
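Since the objects are restricted to RED, GREEN, and BLUE, the classification step of Task 1 could start with a nearest-reference-color rule. The reference RGB values below are rough assumptions; in practice they would be measured under the workspace lighting.

```python
import numpy as np

# Assumed reference colors for the three agreed object colors.
REFERENCE = {"RED": (200, 40, 40), "GREEN": (40, 180, 50), "BLUE": (40, 60, 200)}

def classify_color(rgb):
    """Return the reference color name closest (in Euclidean RGB
    distance) to the measured mean RGB value of an object region."""
    names = list(REFERENCE)
    dists = [np.linalg.norm(np.subtract(rgb, REFERENCE[n])) for n in names]
    return names[int(np.argmin(dists))]
```

For example, a measured mean of `(210, 50, 30)` classifies as RED. Shape classification (CUBE / CYLINDER / SPHERE) would need contour or depth features on top of this.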

CamBot Tasks II
 Required:
– Camera stream
– Coordinates
– Calibration of the camera, and camera-to-robot (hand-eye) calibration
– Models of our objects
– Algorithm for the classification
– Error handling / recovery
 Team: