1 Toward humanoid manipulation in human-centered environments
T. Asfour, P. Azad, N. Vahrenkamp, K. Regenstein, A. Bierbaum, K. Welke, J. Schröder, R. Dillmann
Presentation by Yixing Chen

2 OUTLINE
- Introduction
- The humanoid robot ARMAR-III
- Robot control architecture
- Collision-free motion planning
- Object recognition and localization
- Programming of grasping and manipulation tasks

3 Introduction
- Why do we build humanoid robots?
- What are the requirements for a humanoid robot in a human-centered environment?
- How can a humanoid robot work in a human-centered environment?

4 The humanoid robot ARMAR-III
Video: https://www.youtube.com/watch?v=SHMSyYLRQPM

5 Abilities:
- Deal with the household environment
- Deal with a wide variety of objects
- Deal with different activities
Configuration:
- Head: eyes and vision system
- Upper body: arms and hands for grasping
- Mobile platform: maintains stability and provides mobility

6 Seven subsystems: head, left arm, right arm, left hand, right hand, torso, and a mobile platform.

7 The overall configuration is designed to resemble the human body.

8 Robot control architecture
The task planning level: specifies the subtasks for the multiple subsystems of the robot. This is the highest level; it provides the functions of task representation and is responsible for the scheduling of tasks and the management of resources and skills.
The task coordination level: activates sequential and parallel actions on the execution level in order to achieve the given task goal.
The task execution level: characterized by control theory; executes the specified sensory-motor control commands.

9 (Figure slide: diagram of the robot control architecture.)
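A minimal sketch of how such a three-level hierarchy could be wired together; all class and method names here are assumptions for illustration, not the ARMAR-III software:

```python
# Illustrative only: a toy three-level control hierarchy. Names and the
# subtask format are assumptions, not the ARMAR-III implementation.

class TaskPlanner:
    """Task planning level: decomposes a task into subtasks and
    schedules them across the robot's subsystems."""
    def plan(self, task):
        # e.g. "fetch cup" -> an ordered list of (subsystem, action, target)
        return [("platform", "drive_to", "kitchen"),
                ("right_arm", "reach", "cup"),
                ("right_hand", "grasp", "cup")]

class Executor:
    """Task execution level: sensory-motor control for one subsystem."""
    def __init__(self, name):
        self.name = name
    def execute(self, action, target):
        print(f"{self.name}: {action}({target})")  # stand-in for a control loop

class TaskCoordinator:
    """Task coordination level: activates actions on the execution level."""
    def __init__(self, executors):
        self.executors = executors
    def run(self, subtasks):
        for subsystem, action, target in subtasks:  # sequential for simplicity
            self.executors[subsystem].execute(action, target)

executors = {n: Executor(n) for n in ("platform", "right_arm", "right_hand")}
TaskCoordinator(executors).run(TaskPlanner().plan("fetch cup"))
```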

10 Collision-free motion planning
Multiresolution planning system:
- Low resolution: path planning for the mobile platform or coarse arm movements. Faster.
- High resolution: path planning for complex hand movements such as dexterous manipulation and grasping. Slower.
Guaranteeing collision-free paths:
- Use rapidly-exploring random trees (RRT).
- Use enlarged robot models: the enlarged models are constructed by slightly scaling up the convex 3D models of the robot so that the minimum distance between the surfaces of the original and the enlarged model reaches a lower bound d_freespace. Any sample that is collision-free for the enlarged model is therefore at least d_freespace clear of obstacles, so the method requires no explicit distance computation and speeds up the algorithm. (A minimal sketch follows.)
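A minimal planar RRT sketch of these two ideas, assuming circular obstacles; inflating the obstacle radius by d_freespace stands in for checking against the enlarged robot model. This is an illustration, not the authors' implementation:

```python
import math, random

# Toy 2D RRT. Obstacles, workspace bounds and step size are assumed values;
# the inflation margin D_FREESPACE mimics the enlarged-model collision check.

OBSTACLES = [((5.0, 5.0), 1.5)]                 # circles: (center, radius)
D_FREESPACE = 0.3                               # inflation margin

def collision_free(p):
    return all(math.dist(p, c) > r + D_FREESPACE for c, r in OBSTACLES)

def steer(a, b, step=0.4):
    d = math.dist(a, b)
    t = min(1.0, step / d) if d > 0 else 0.0
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def rrt(start, goal, iters=5000):
    tree = {start: None}                        # node -> parent
    for _ in range(iters):
        rnd = goal if random.random() < 0.1 else \
            (random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda n: math.dist(n, rnd))
        new = steer(near, rnd)
        if new not in tree and collision_free(new):
            tree[new] = near
            if math.dist(new, goal) < 0.4:      # close enough: extract path
                path, n = [], new
                while n is not None:
                    path.append(n)
                    n = tree[n]
                return path[::-1]
    return None                                 # no path found

print(rrt((1.0, 1.0), (9.0, 9.0)))
```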

11 Collision-free motion planning
Lazy collision checking:
The robot is very complex in shape, so even when the sampled path points are collision-free, the path segments between them may still be in collision. Lazy collision checking decouples the collision checks for C-space samples from those for path segments, which speeds up the search for a path.
- First step: the normal sampling-based RRT algorithm searches for a solution path in the C-space.
- Second step: the enlarged-model approach checks the collision status of the path segments of the solution path. If a path segment between two configurations c_i and c_(i+1) fails the collision test, a local detour is created by starting a subplanner that searches a way around the C-space obstacle. (Sketched below.)
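A sketch of this two-step scheme, reusing collision_free and rrt from the previous sketch; the segment sampling density and the detour strategy are simplifications of the paper's enlarged-model segment test:

```python
# Lazy collision checking (illustrative): validate segments only after a
# solution path exists; repair failing segments with a local subplanner.

def segment_free(a, b, samples=20):
    """Densely sample the straight segment between configurations a and b."""
    return all(collision_free(((1 - t) * a[0] + t * b[0],
                               (1 - t) * a[1] + t * b[1]))
               for t in (i / samples for i in range(samples + 1)))

def lazy_validate(path):
    """Check each path segment; plan a local detour where a segment fails."""
    repaired = [path[0]]
    for a, b in zip(path, path[1:]):
        if segment_free(a, b):
            repaired.append(b)
        else:
            detour = rrt(a, b)                  # subplanner as local detour
            if detour is None:
                return None                     # local repair failed
            repaired.extend(detour[1:] + [b])   # rejoin the original path at b
    return repaired
```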

12 Result

13 Object recognition and localization
To work in a household environment, the robot must be able to recognize objects and localize them with an accuracy high enough for grasping. This part introduces recognition and localization based on shape and based on texture.
Recognition and localization based on shape:
- Targets uniformly colored objects.
- Simplifies the segmentation problem, letting the robot concentrate on complicated tasks such as loading and emptying the dishwasher.
- Combines an appearance-based method, a model-based method, and stereo vision.
Recognition and localization based on texture:
- Targets textured objects such as food boxes; recognition is more complex.

14 Recognition and localization based on shape
Segmentation:
- Perform color segmentation in HSV color space.
- Using stereo vision, the properties of each resulting blob are represented by its bounding box, centroid, and number of pixels.
Region processing pipeline:
- Use principal component analysis (PCA) and normalize the region in size.
- Resize the region to a square window of 64×64 pixels.
6D localization:
- Six-dimensional space: varying position and orientation.
- Position and orientation are calculated independently:
  - The position estimate is calculated by triangulating the centroid of the color blob.
  - The orientation estimate is retrieved from the database for the matched view.
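A sketch of the segmentation stage with OpenCV; the file name and HSV thresholds are assumptions, not values from the paper:

```python
import cv2

# Illustrative HSV segmentation and blob extraction. The input file and the
# threshold range (here roughly "red") are placeholders.

img = cv2.imread("left_camera.png")                     # assumed input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))   # color thresholds

# Blob properties: bounding box, centroid and pixel count per region.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):                                   # label 0 = background
    x, y, w, h, area = stats[i]
    cx, cy = centroids[i]
    region = cv2.resize(mask[y:y + h, x:x + w], (64, 64))  # size-normalized
    print(f"blob {i}: bbox=({x},{y},{w},{h}) "
          f"centroid=({cx:.1f},{cy:.1f}) pixels={area}")
```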

15 Recognition and localization based on shape
Typical result of a scene analysis: input image of the left camera (left) and 3D visualization of the recognition and localization result (right).

16 Recognition and localization based on texture
Feature calculation:
- Different views of the same image patch around a feature point vary, so the images cannot simply be correlated.
- The descriptors are computed on the basis of a local planar assumption.
- SIFT (scale-invariant feature transform) features are used to describe the images.
- Each feature stores the position (u, v), the rotation angle φ, and the feature vector {x_j}.
2D localization:
- Establish correspondences between the current view and the training image. (A matching sketch follows.)
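A minimal SIFT matching sketch with OpenCV (the paper uses SIFT features; the OpenCV API, file names, and the ratio-test threshold are assumptions of this sketch, not the authors' code):

```python
import cv2

# Illustrative feature matching between a scene view and a training image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(cv2.imread("scene.png", 0), None)
kp2, des2 = sift.detectAndCompute(cv2.imread("training.png", 0), None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} correspondences")
```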

17 Recognition and localization based on texture
Correspondences between a view of the scene and the training image: the left picture is the view, and the right picture is the training image. The blue box illustrates the result of 2D localization.

18 Recognition and localization based on texture
6D localization:
Calculate the pose based on the correspondences between 3D model coordinates and image coordinates. To improve accuracy, the calibrated stereo system is used to compute depth information with maximum accuracy:
- Determine highly textured points within the calculated 2D contour of the object in the left camera image.
- Determine correspondences with subpixel accuracy in the right camera image.
- Calculate a 3D point for each correspondence.
- Fit a 3D plane to the calculated 3D point cloud.
- Calculate the intersections of the four 3D lines through the 3D corners in the left image with the 3D plane. (The last two steps are sketched below.)
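A minimal sketch of the plane fit and line-plane intersection (steps 4 and 5), assuming NumPy arrays; this is textbook least-squares geometry, not the paper's code:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud via SVD:
    the normal is the singular vector with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                     # (point on plane, unit normal)

def intersect_line_plane(p0, d, centroid, normal):
    """Intersect the line p0 + t*d (e.g. a viewing ray through a 3D corner)
    with the fitted plane."""
    t = np.dot(centroid - p0, normal) / np.dot(d, normal)
    return p0 + t * d

# Usage on a synthetic, nearly planar cloud:
pts = np.random.randn(100, 3) * np.array([1.0, 1.0, 0.01])
c, n = fit_plane(pts)
corner = intersect_line_plane(np.zeros(3), np.array([0.1, 0.1, 1.0]), c, n)
```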

19 Recognition and localization based on texture
Recognition and localization results for boxes and cups: the left picture is the view (input image), and the right picture is the 3D visualization of the result.

20 Programming of grasping and manipulation tasks
The central idea: a database with 3D models of all the objects encountered in the robot's workspace, together with a 3D model of the robot hand.
Integrated grasp planning system:
- The global model database: contains the CAD models of all objects and a set of feasible grasps for each object.
- The offline grasp analyser: uses the models of the objects and the hand to compute a set of stable grasps.
- An online visual procedure to identify objects in stereo images: matches the image features against the prebuilt object models, then determines the location and pose.
After localizing the object, the robot selects a grasp for it from the set of stable grasps. (A selection sketch follows.)
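A sketch of the online selection step; the database layout, field names, and the reachability test are assumptions for illustration:

```python
# Toy "global model database": per-object lists of precomputed stable grasps.
GRASP_DATABASE = {
    "cup": [
        {"type": "power", "gcp": (0.0, 0.0, 0.05), "approach": (0, -1, 0)},
        {"type": "precision", "gcp": (0.0, 0.04, 0.10), "approach": (0, 0, -1)},
    ],
}

def select_grasp(object_name, reachable):
    """Return the first precomputed grasp that the planner deems reachable
    from the robot's current pose (reachability test supplied by caller)."""
    for grasp in GRASP_DATABASE.get(object_name, []):
        if reachable(grasp):
            return grasp
    return None                                 # no feasible grasp stored

# Example: a stand-in reachability predicate.
print(select_grasp("cup", reachable=lambda g: g["approach"][2] <= 0))
```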

21 Integrated grasp planning system

22 Offline grasp analysis
To ensure the accuracy of a grasp, our approach describes each grasp of a given object by the following features (a possible data layout is sketched after the list):
- Grasp type
- Grasp starting point (GSP)
- Grasp center point (GCP)
- Approaching direction
- Hand orientation
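One possible encoding of these five features as a record; the field types are assumptions, since the paper does not prescribe a data layout:

```python
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class GraspDescription:
    grasp_type: str          # e.g. "power" or "precision"
    gsp: Vec3                # grasp starting point
    gcp: Vec3                # grasp center point
    approach: Vec3           # approaching direction (unit vector)
    hand_orientation: Vec3   # hand orientation, e.g. roll/pitch/yaw

g = GraspDescription("power", (0.0, 0.0, 0.15), (0.0, 0.0, 0.05),
                     (0.0, 0.0, -1.0), (0.0, 1.57, 0.0))
```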

23 Offline grasp analysis
Grasp type: determines the grasp execution control, such as the hand preshape posture, the control strategy, which fingers are used in the grasp, and so on.

24 Grasp video
A grasp video: https://www.youtube.com/watch?v=QYEJJA52wG8
And the whole work process: https://www.youtube.com/watch?v=87cbivmjfe8

25 Summary
This paper introduced:
- A humanoid robot consisting of an active head with a vision system, two arms with five-fingered hands, a torso, and a holonomic mobile platform.
- An integrated system for grasping and manipulation tasks on a humanoid robot. The system incorporates a vision system for the recognition and localization of objects, a path planner for the generation of collision-free trajectories, and an offline grasp analyser that provides the most feasible grasp configuration for each object.

26 Thank You!

