
1 Humanoid Robot In Our World Toward humanoid manipulation in human-centred environments
Presented By Yu Gu (yg2466)

2 Outline
Introduction & Motivation
ARMAR-III – The Robot
System
Motion Planning
Recognition and Localization
Grasping

3 Introduction
Current research areas:
Human–humanoid interaction and cooperation
Human-centred environments: the household
Skills: manipulative, perceptive, communicative

4 Motivation Put Humanoid Robots in our environment Let’s coexist!

5 Motion Planning · The Robot · Control Architecture · Recognition & Localization · Grasp Analysis · System

6 ARMAR-III : The Robot
Design considerations:
Mimic human sensory and sensorimotor capabilities
Deal with household environments
Versatile
(Presenter note: sensorimotor skills involve receiving sensory messages (sensory input) and producing a response (motor output); mention during the presentation.)

7 ARMAR-III : The Robot
Design specs:
Upper body: light and modular; human-like size and proportions
Lower body: holonomic movability

8 ARMAR-III : The Robot
Components: head, eyes, neck, arms, torso, hands
Presenter notes:
Head: 7 DOF; 2 eyes (3 DOF); each eye carries two digital color cameras (one wide-angle lens, one narrow-angle lens); mounted on a 4-DOF neck
Acoustics: microphone array in the head (6 microphones) plus an inertial sensor
Upper body: 33 DOF; arms: 14 DOF (per arm: 3-DOF shoulder, 2-DOF elbow, 2-DOF wrist); torso: 3 DOF; five-fingered hands (8 DOF each)
Platform: wheel-based platform

9 ARMAR-III : The Robot
Components:
Platform: locomotion via omniwheels
Sensor system: laser scanners, optical encoders

10 ARMAR-III : The Robot

11 Motion Planning · The Robot · Control Architecture · Recognition & Localization · Grasp Analysis · System

12 ARMAR-III Control Architecture
Task Planning Level
Synchronization and Coordination Level
Sensor-Motor Level

13 ARMAR-III Control Architecture
Tasks need to be decomposed Subtasks are scheduled, executed and synchronized

14 ARMAR-III Control Architecture
Task Planning Level (highest level), responsible for: task scheduling, resource/skill management, generating subtasks
Task Coordination Level: activates sequential and parallel actions

15 ARMAR-III Control Architecture
Task Execution Level: executes commands
Local models: the active model (short-term memory)
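Slides 12–15 describe the layered control architecture in prose only. Below is a minimal Python sketch of how such a three-level layering could be organized, purely as an illustration: the class names, the example task, and the primitive commands are hypothetical and not taken from the ARMAR-III software.

```python
# Hypothetical sketch of a three-level control architecture.
# Names (bring_cup, drive_to_table, ...) are illustrative, not ARMAR-III code.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Subtask:
    name: str
    command: Callable[[], None]          # primitive sensorimotor command


class TaskPlanningLevel:
    """Highest level: schedules tasks and generates subtasks."""
    def plan(self, task: str) -> List[Subtask]:
        if task == "bring_cup":
            return [
                Subtask("drive_to_table", lambda: print("driving to table")),
                Subtask("locate_cup",     lambda: print("running object localization")),
                Subtask("grasp_cup",      lambda: print("executing grasp")),
            ]
        raise ValueError(f"unknown task: {task}")


class TaskExecutionLevel:
    """Executes primitive commands; keeps an 'active model' as short-term memory."""
    def __init__(self):
        self.active_model: dict = {}     # short-term memory of the current scene

    def execute(self, subtask: Subtask) -> None:
        subtask.command()
        self.active_model[subtask.name] = "done"


class TaskCoordinationLevel:
    """Activates subtasks sequentially (parallel activation omitted for brevity)."""
    def __init__(self, executor: TaskExecutionLevel):
        self.executor = executor

    def run(self, subtasks: List[Subtask]) -> None:
        for st in subtasks:
            self.executor.execute(st)


if __name__ == "__main__":
    executor = TaskExecutionLevel()
    coordinator = TaskCoordinationLevel(executor)
    coordinator.run(TaskPlanningLevel().plan("bring_cup"))
```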

16 ARMAR-III Computer Architecture

17 Motion Planning · The Robot · Control Architecture · Recognition & Localization · Grasp Analysis · System

18 ARMAR-III Motion Planning
Demos 1 and 2
Key ideas: enlarged robot models to ensure collision-free paths; lazy collision checking; results

19 ARMAR-III Motion Planning
Requirements: fast and adaptive to changing environments; must also ensure collision-free paths
Previous methods: not designed for humanoids; the 'bubbles' approach is slow
Current approach: RRT with a resolution parameter
(Presenter note: show an RRT video in case someone does not know what RRT is.)

20 ARMAR-III Motion Planning
Enlarged Robot Models

21 ARMAR-III Motion Planning
Planning with enlarged robot models provides a lower bound (safety margin)
Two-step lazy collision checking:
1. Normal RRT sampling and validation
2. Validation of the path using the enlarged model, with a detour planned if a segment fails (see the sketch below)
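A minimal Python sketch of the two-step idea described above: plan with a standard RRT against the normal robot model, then lazily re-validate each path segment against an enlarged (safety-margin) model and re-plan a detour for segments that fail. The 2D point robot, obstacle, and parameters are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of two-step planning with lazy collision checking (2D point robot).
# 'collides' and 'collides_enlarged' stand in for checks against the normal and
# enlarged robot models; they are placeholders, not the ARMAR-III implementation.
import random
import math

def collides(q):             # check against the normal model (placeholder obstacle)
    return 0.4 < q[0] < 0.6 and q[1] < 0.7

def collides_enlarged(q):    # same check with an enlarged (safety-margin) model
    return 0.35 < q[0] < 0.65 and q[1] < 0.75

def segment_free(a, b, check, step=0.02):
    n = max(1, int(math.dist(a, b) / step))
    return all(not check((a[0] + (b[0]-a[0])*i/n, a[1] + (b[1]-a[1])*i/n))
               for i in range(n + 1))

def rrt(start, goal, check, iters=3000, eps=0.05):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        q = goal if random.random() < 0.1 else (random.random(), random.random())
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        d = math.dist(nodes[i], q)
        q = nodes[i] if d == 0 else tuple(nodes[i][j] + (q[j]-nodes[i][j]) * min(1, eps/d)
                                          for j in range(2))
        if segment_free(nodes[i], q, check):
            parent[len(nodes)] = i
            nodes.append(q)
            if math.dist(q, goal) < eps:      # goal reached: reconstruct the path
                path, k = [], len(nodes) - 1
                while k is not None:
                    path.append(nodes[k]); k = parent[k]
                return path[::-1]
    return None

# Step 1: plan with the normal model. Step 2: lazily re-validate each segment with the
# enlarged model and plan a detour for segments that fail.
path = rrt((0.1, 0.1), (0.9, 0.9), collides)
if path:
    fixed = [path[0]]
    for a, b in zip(path, path[1:]):
        if segment_free(a, b, collides_enlarged):
            fixed.append(b)
        else:
            detour = rrt(a, b, collides_enlarged)   # re-plan just this segment
            fixed.extend(detour[1:] if detour else [b])
    print("waypoints:", len(fixed))
```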

22 ARMAR-III Motion Planning
Result

23 Motion Planning · The Robot · Control Architecture · Recognition & Localization · Grasp Analysis · System

24 ARMAR-III Recognition & Localization
Based on shape: recognition & localization with a global appearance-based object recognition system; region processing; 6D localization
Based on texture: local appearance-based object recognition system; feature extraction; object recognition; 2D & 6D localization
P. Azad, T. Asfour, R. Dillmann, Combining appearance-based and model-based methods for real-time object recognition and 6D localization, in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Beijing, China, 2006.
S. Nayar, S. Nene, H. Murase, Real-time 100 object recognition system, in: International Conference on Robotics and Automation (ICRA), vol. 3, Minneapolis, USA, 1996, pp. 2321–2325.

25 ARMAR-III Recognition & Localization
Based on shape
Segmentation: color segmentation in HSV color space (a minimal sketch follows)
Limitation: only applicable to single-colored (solid color) objects
Region processing pipeline
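A minimal sketch of HSV color segmentation of a single-colored object using OpenCV; the image path and the hue/saturation thresholds are assumptions.

```python
# Hedged sketch of color segmentation in HSV (the shape-based pipeline starts from
# such a segmentation). Image path and the hue range for a red object are assumptions.
import cv2

img = cv2.imread("scene.png")                        # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Threshold a single-colored (solid color) object, e.g. a saturated red region.
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))

# Keep the largest connected region as the object candidate.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    blob = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(blob)
    print("object region:", x, y, w, h)
```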

26 ARMAR-III Recognition & Localization
Region processing pipeline:
S1: normalization – representation normalized in size
S2: gradient image – robust, less ambiguous, works across various lighting conditions
[Azad et al., 2006; Nayar et al., 1996]
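A minimal sketch of the two region-processing steps, assuming OpenCV: resize the segmented region to a fixed window (size normalization) and compute a gradient-magnitude image. The 64x64 window size is an assumption.

```python
# Hedged sketch of the region-processing steps: size normalization of the segmented
# region followed by a gradient image. The 64x64 window size is an assumption.
import cv2
import numpy as np

def normalized_gradient(gray_region: np.ndarray, size: int = 64) -> np.ndarray:
    patch = cv2.resize(gray_region, (size, size))        # S1: normalize in size
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0, ksize=3)      # S2: gradient image
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 1, cv2.NORM_MINMAX)  # scale-robust representation

# Example: region = gray[y:y+h, x:x+w] taken from the segmentation step above.
```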

27 ARMAR-III Recognition & Localization
6D localization: position and orientation are calculated independently
Position: triangulation of the region centroid
Orientation: retrieve the best-matching view from the database (views generated from the 3D model), accelerated with PCA
[Azad et al., 2006; Nayar et al., 1996]
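A minimal sketch of the position step, assuming a calibrated, rectified stereo pair: triangulate the region centroids from the left and right images. The projection matrices and pixel coordinates are made-up example values.

```python
# Hedged sketch: position from stereo triangulation of the region centroid.
# The projection matrices P_left/P_right and pixel centroids are assumptions.
import cv2
import numpy as np

# Idealized rectified stereo pair: focal length 500 px, 10 cm baseline.
P_left  = np.array([[500, 0, 320, 0], [0, 500, 240, 0], [0, 0, 1, 0]], dtype=float)
P_right = np.array([[500, 0, 320, -50], [0, 500, 240, 0], [0, 0, 1, 0]], dtype=float)

centroid_left  = np.array([[300.0], [250.0]])   # region centroid in the left image
centroid_right = np.array([[280.0], [250.0]])   # corresponding centroid on the right

X_h = cv2.triangulatePoints(P_left, P_right, centroid_left, centroid_right)
X = (X_h[:3] / X_h[3]).ravel()                   # homogeneous -> Cartesian (camera frame)
print("object position (camera frame):", X)
```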

28 ARMAR-III Recognition & Localization
Based on texture – feature calculation:
Shi-Tomasi features – view set
Maximally Stable Extremal Regions (MSER) combined with Local Affine Frames (LAF)
SIFT features – best performing

29 ARMAR-III Recognition & Localization
SIFT descriptor: rotation invariant; invariant to skew and depth changes to some degree
Gradient based; yields a rotational angle and a feature vector per keypoint
D.G. Lowe, Object recognition from local scale-invariant features, in: International Conference on Computer Vision (ICCV), Corfu, Greece, 1999, pp. 1150–1157.
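A minimal sketch of SIFT feature extraction and matching against a stored training view with OpenCV (SIFT is part of the main module from OpenCV 4.4 onward); the image file names and the 0.75 ratio-test threshold are assumptions.

```python
# Hedged sketch of SIFT feature extraction and matching against a trained view.
# Image names are assumptions.
import cv2

sift = cv2.SIFT_create()
train = cv2.imread("object_view.png", cv2.IMREAD_GRAYSCALE)   # stored training view
query = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

kp_t, des_t = sift.detectAndCompute(train, None)
kp_q, des_q = sift.detectAndCompute(query, None)

# Lowe's ratio test: keep matches whose best distance is clearly below the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_t, des_q, k=2) if m.distance < 0.75 * n.distance]
print(len(good), "good matches")
```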

30 ARMAR-III Recognition & Localization
Generalized Hough transform: finds imperfect instances of objects within a certain class of shapes by a voting procedure. The voting is carried out in a parameter space; object candidates are obtained as local maxima in an accumulator space explicitly constructed by the algorithm.
Voting uses the rotation information of the features
Output: object recognition and 2D localization (a voting sketch follows)
[Lowe, 1999]
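A minimal sketch of a Hough-style voting scheme for the 2D object position: each feature match casts a vote for the object centre using the training keypoint's stored offset, rotated by the relative keypoint orientation and scaled by the keypoint size ratio. The accumulator cell size and the reuse of the matches from the previous snippet are assumptions; this illustrates the voting idea, not the paper's exact formulation.

```python
# Hedged sketch of Hough-style voting for the 2D object position.
import numpy as np

def vote_for_centre(matches, kp_train, kp_query, train_centre, img_shape, cell=8):
    acc = np.zeros((img_shape[0] // cell + 1, img_shape[1] // cell + 1))
    for m in matches:
        kt, kq = kp_train[m.queryIdx], kp_query[m.trainIdx]
        # Offset from the training keypoint to the object centre, rotated by the
        # orientation difference and scaled by the keypoint size ratio.
        dx, dy = train_centre[0] - kt.pt[0], train_centre[1] - kt.pt[1]
        dtheta = np.deg2rad(kq.angle - kt.angle)
        s = (kq.size / kt.size) if kt.size > 0 else 1.0
        cx = kq.pt[0] + s * (dx * np.cos(dtheta) - dy * np.sin(dtheta))
        cy = kq.pt[1] + s * (dx * np.sin(dtheta) + dy * np.cos(dtheta))
        if 0 <= cy < img_shape[0] and 0 <= cx < img_shape[1]:
            acc[int(cy) // cell, int(cx) // cell] += 1       # cast one vote
    peak = np.unravel_index(np.argmax(acc), acc.shape)       # local maximum = candidate
    return (peak[1] * cell, peak[0] * cell), acc[peak]

# Using the matches and keypoints from the SIFT snippet above:
# centre, votes = vote_for_centre(good, kp_t, kp_q,
#                                 (train.shape[1] / 2, train.shape[0] / 2), query.shape)
```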

31 ARMAR-III Recognition & Localization
6D localization: POSIT algorithm
[Lowe, 1999]
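The pipeline described here uses POSIT for 6D pose estimation; as a hedged stand-in, the sketch below recovers a pose from 2D–3D correspondences with OpenCV's solvePnP. The model points, image points, and camera matrix are illustrative assumptions.

```python
# Hedged sketch: 6D pose from 2D-3D correspondences. The original pipeline uses POSIT;
# here cv2.solvePnP serves as a modern stand-in. All values are illustrative.
import cv2
import numpy as np

object_points = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.07, 0], [0, 0.07, 0]],
                         dtype=np.float32)                 # 3D model points (metres)
image_points = np.array([[310, 240], [360, 242], [358, 275], [308, 273]],
                        dtype=np.float32)                  # matched 2D features (pixels)
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)                                 # rotation matrix
print("object position:", tvec.ravel())
```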

32 ARMAR-III Recognition & Localization
6D localization limitation: correctness depends only on the 2D correspondences; the recovered depth is sensitive to errors in the 2D coordinates
Approach for cuboids: determine highly textured points; determine stereo correspondences; calculate 3D points; fit a 3D plane
D. DeMenthon, L. Davis, D. Oberkampf, Iterative pose estimation using coplanar points, in: International Conference on Computer Vision and Pattern Recognition (CVPR), 1993, pp. 626–627.

33 ARMAR-III Recognition & Localization
6D localization – approach for cuboids (steps): determine highly textured points; determine stereo correspondences; calculate 3D points; fit a 3D plane (plane-fitting sketch below)
[DeMenthon et al., 1993]
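A minimal sketch of the "fit a 3D plane" step: a least-squares plane through triangulated 3D points via SVD. The point cloud here is synthetic.

```python
# Hedged sketch of the plane-fitting step: least-squares plane through triangulated
# 3D points using SVD. The point cloud is synthetic.
import numpy as np

rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 0.2, 200),                  # points roughly on the
                       rng.uniform(0, 0.1, 200),                  # cuboid's front face,
                       0.8 + 0.002 * rng.standard_normal(200)])   # with a little noise

centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)     # smallest singular vector = plane normal
normal = vt[-1]
d = -normal @ centroid
print("plane normal:", normal, "offset:", d)
```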

34 Motion Planning · The Robot · Control Architecture · Recognition & Localization · Grasp Analysis · System

35 ARMAR-III Grasping
Demos 1–5
Central idea: a database of 3D models
Components: global model database; offline grasp analyzer; online visual procedure to identify objects in stereo images; grasp selection – simulated using GraspIt!

36 ARMAR-III Grasping
Components:
Global model database – stores each model together with its grasps
Offline grasp analyzer – computes stable grasps
Online visual procedure – identifies objects in stereo images, matches the model (recognition) and finds its location and pose
Grasp selection (see the sketch below)

37 ARMAR-III Grasping
Offline grasp analysis limitation: inaccuracy and uncertainty in the object's location
Composition of a grasp: grasp type; Grasp Starting Point (GSP); approaching direction; hand orientation (see the sketch below)
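A minimal sketch of a grasp record with the fields listed above (grasp type, GSP, approaching direction, hand orientation), stored per model in ranked order as slide 38 mentions; the field types and example values are assumptions.

```python
# Hedged sketch of the grasp description listed on the slide; field names follow the
# slide, field types and example values are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Grasp:
    grasp_type: str               # e.g. "power" or "precision"
    gsp: np.ndarray               # Grasp Starting Point, in the object frame
    approach_dir: np.ndarray      # approaching direction (unit vector)
    hand_orientation: np.ndarray  # e.g. a quaternion (x, y, z, w)

# Grasps of this form would be precomputed offline and stored per model,
# in order of preference (see slide 38).
example = Grasp("power", np.array([0.0, -0.08, 0.03]), np.array([0.0, 1.0, 0.0]),
                np.array([0.0, 0.0, 0.0, 1.0]))
print(example.grasp_type, example.approach_dir)
```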

38 ARMAR-III Grasping
Grasp Centre Point (GCP): a virtual point on the hand, aligned with the GSP during execution
Practical application: grasps are stored in order (ranked)

39 Thoughts On Paper
Pros:
Provides an integrated humanoid platform and system for further development
Aimed not only at research but also at real applications
Combines a vision system, a path planner and an offline grasp analyzer
Cons:
Recognition: segmentation is not robust across a diverse color range
Relies on pre-stored models and grasps

40 Thoughts On Paper
What can be done?
An on-board, powerful compute engine
Use deep learning to adapt to novel objects
Train of thought: why not use assumptions?

41 Thank You For Listening!

