Model for Unfolding Laundry using Interactive Perception
Bryan Willimon, IROS 2011, San Francisco, California



Overview
Using a PUMA 500 and 3D simulation software, a piece of laundry is pulled in different directions at various points on the cloth in order to flatten the laundry.

Previous Related Work on Unfolding
D. Katz and O. Brock. Manipulating articulated objects with interactive perception. ICRA 2008.
Cusano-Towner et al. aimed to flatten a piece of crumpled clothing by implementing a disambiguation phase and a reconfiguration phase.

Model to Unfold Laundry into a Flat Canonical Position
First Phase
- Remove any minor wrinkles and/or folds
- Pull the cloth at individual corners every 'd' degrees
Second Phase
- Define the cloth model
- Calculate the various components needed for the cloth model

First Phase Each step of the process is numbered, along with the orientation used to go from step i to step i+1. In each step, the outer edge of the piece of clothing is grasped and pulled away from the center of the object.
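The first-phase loop described above can be sketched as follows. The `find_outer_edge_point` and `grasp_and_pull` callbacks are hypothetical stand-ins for the perception and robot-control layers, and the angle convention (0 degrees pointing toward the bottom of the image) follows the orientation convention used later in the talk:

```python
import math

def first_phase(find_outer_edge_point, grasp_and_pull, center, steps=8):
    """Sketch of the first phase: at each step, grasp a point on the outer
    edge of the cloth and pull it directly away from the object's center.
    Both callbacks are hypothetical, not the paper's actual interfaces."""
    for i in range(steps):
        gy, gx = find_outer_edge_point(i)
        # Pull direction: straight away from the center, expressed as an
        # angle where 0 degrees points toward the bottom of the image.
        angle = math.degrees(math.atan2(gx - center[1], gy - center[0])) % 360
        grasp_and_pull((gy, gx), angle)
```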

Second Phase Cloth Model
- Peak Ridge
- Corner Continuity
- Corner Locations
- Peak Continuity
- Discontinuity Check

Second Phase The cloth model equation takes as inputs the coordinates of the grasping point on the image and the orientation at which to pull the object.

Second Phase The peak ridge equation compares each value on the depth map against HP, the highest point (i.e., the largest intensity value) on the depth map. The equation returns a point (the centroid) and the orientation (major/minor vectors) of the peak ridge.
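A minimal sketch of such a peak-ridge computation. Taking the peak region as all depth values within a fixed band of HP, and using the principal axes of that region for the major/minor vectors, are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def peak_ridge(depth, band=0.1):
    # HP: highest point, i.e. the largest value on the depth map.
    hp = depth.max()
    # Peak region: pixels within `band` of HP (assumed definition).
    ys, xs = np.nonzero(depth >= hp - band)
    centroid = (ys.mean(), xs.mean())
    # Principal axes of the region approximate the ridge orientation.
    pts = np.stack([ys - centroid[0], xs - centroid[1]])
    cov = pts @ pts.T / len(ys)
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]   # major axis of the peak ridge
    minor = evecs[:, np.argmin(evals)]   # minor axis
    return centroid, major, minor

# Example: a horizontal ridge of height 1 across row 2.
depth = np.zeros((5, 7))
depth[2, 1:6] = 1.0
c, major, minor = peak_ridge(depth)
```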

Second Phase The corner continuity equation examines a window around each pixel to find corner locations. It returns the location of every detected corner in terms of its position and the orientation at which the corner is angled.
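The transcript did not preserve the actual corner operator, so as a stand-in, a Harris-style detector scores a small window around each pixel and keeps locations whose response exceeds a threshold. All parameter values here are assumptions for illustration:

```python
import numpy as np

def harris_corners(img, k=0.04, win=1, thresh=1e-4):
    """Minimal Harris-style corner sketch on a grayscale image.
    Harris is a standard stand-in, not necessarily the paper's detector."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    h, w = img.shape
    corners = []
    for y in range(win, h - win):
        for x in range(win, w - win):
            sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
            # Structure tensor summed over the window around (y, x).
            Sxx, Syy, Sxy = Ixx[sl].sum(), Iyy[sl].sum(), Ixy[sl].sum()
            # Harris response: high only where gradients vary in two directions.
            r = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2
            if r > thresh:
                corners.append((y, x))
    return corners

# Example: a bright square on a dark background has detectable corners.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
corners = harris_corners(img)
```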

Second Phase The corner locations equation uses the relation "to be on the same continuous surface as". It locates corners on the top surface that is continuous with the peak ridge and returns a list of corners that is a subset of the detected corners.

Second Phase The peak continuity equation uses an indicator that is 1 if the corner is on the peak region and 0 otherwise. It determines whether a detected corner has a smooth surface or a discontinuity close to the peak surface and returns the subset of corner locations that qualify.

Second Phase The discontinuity equation uses the intensity values of a binary image in which locations of discontinuity are labeled 1 and everything else is labeled 0. Discontinuity locations are found using a 3-by-3 window surrounding each pixel.

Second Phase If the difference in intensity is greater than a predefined threshold, the pixel is labeled as an edge (a fixed threshold was used in our experiments). The depth map may also contain continuous regions with a steep slope. To handle this, all discontinuous areas are double-checked with a larger window: if the slope is consistent, the area is not discontinuous; otherwise it is.
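The two-stage discontinuity check described above can be sketched as follows. The threshold and window sizes are assumed values for illustration, not the paper's:

```python
import numpy as np

def discontinuity_map(depth, thresh=0.05):
    """Binary map: 1 where the depth jump to any 3x3 neighbour exceeds
    `thresh`, else 0 (the threshold value is an assumption)."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = depth[y - 1:y + 2, x - 1:x + 2]
            if np.abs(win - depth[y, x]).max() > thresh:
                out[y, x] = 1
    return out

def is_steep_but_continuous(depth, y, x, k=2):
    """Second stage: re-examine a flagged pixel with a larger (2k+1)-wide
    window; a consistent slope across the window means the surface is
    continuous (steep but smooth), not a true discontinuity."""
    row = depth[y, x - k:x + k + 1]
    diffs = np.diff(row)
    return np.allclose(diffs, diffs[0], atol=1e-6)

# Example: a sharp step in depth is flagged; a gentle ramp is not.
step = np.zeros((5, 5)); step[:, 3:] = 1.0
ramp = np.tile(np.linspace(0, 1, 5) * 0.04, (5, 1))
```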

Experimental Results
The proposed approach was applied to a variety of initial configurations of cloth, using 3D simulation software, to test its ability to perform under various scenarios. We tested our approach on a single washcloth to demonstrate how our algorithm works on a piece of laundry. Five experiments were conducted:
1) Differences between Models of the Nearest Neighbors
2) Experimental Test of Algorithm
3) Taxonomy of Possible Starting Configurations
4) Test to Fully Flatten the Cloth
5) Experiment using PUMA 500

Experimental Results
Differences between Models of the Nearest Neighbors: The eight orientations start at 0 degrees, with 0 degrees pointing to the bottom of the image, and proceed counter-clockwise in 45-degree intervals (i.e., 0, 45, 90, 135, 180, 225, 270, 315 degrees).
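The eight pull orientations above are straightforward to enumerate; the mapping of each angle to a pull vector in image coordinates (x right, y down) is an assumed convention for illustration:

```python
import math

# Eight orientations: 0 degrees points toward the bottom of the image,
# increasing counter-clockwise in 45-degree steps.
orientations = [i * 45 for i in range(8)]

def pull_vector(deg):
    """Unit pull direction in image coordinates (x right, y down).
    With y growing downward, 0 degrees maps to (0, 1) and counter-clockwise
    rotation takes it through (1, 0) at 90 degrees."""
    r = math.radians(deg)
    return (math.sin(r), math.cos(r))
```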

Experimental Results
Differences between Models of the Nearest Neighbors: The lower the difference value, the more the two configurations have in common in terms of shape space. This histogram illustrates how much the cloth configuration can change by pulling from a single point.

Experimental Results
Experimental Test of Algorithm: This experiment tested the first phase of the proposed algorithm and monitored the process over eight iterations of pulling the cloth. The models continually change configuration in a manner that flattens and unfolds larger areas of the cloth as the iterations increase. Eventually, the cloth is mostly flattened out into a more recognizable shape in the final iteration.

Experimental Results
Experimental Test of Algorithm: The percentage of flatness is calculated from the values on the depth map. The overall goal is to achieve 100% flatness in preparation for the next step in the laundry process.
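A minimal sketch of how such a flatness percentage could be computed from a depth map; the table-depth and tolerance parameters are assumptions, not the paper's exact definition:

```python
import numpy as np

def percent_flat(depth, table_depth=0.0, tol=0.01):
    """Percentage of pixels whose depth is within `tol` of the table
    surface. `table_depth` and `tol` are assumed parameters."""
    flat = np.abs(depth - table_depth) <= tol
    return 100.0 * flat.sum() / depth.size

# Example: a 4x4 cloth where 12 of 16 pixels lie flat on the table.
depth = np.zeros((4, 4))
depth[0, :] = 0.05  # a wrinkle raises one row above the table
print(percent_flat(depth))  # 75.0
```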

Experimental Results
Taxonomy of Possible Starting Configurations: The initial and final configurations of three different starting configurations after going through the first phase of the proposed algorithm over eight iterations. The dropped cloth was created by dropping the cloth onto the table from a predefined height. The folded cloth was created by sliding the article across the corner of the table and allowing it to fold on top of itself. The placed cloth was placed on the table at the same position as the dropped cloth.

Experimental Results
Test to Fully Flatten the Cloth: This experiment tested whether the proposed approach would completely flatten a piece of clothing. The test used the first and second phases of the algorithm to grasp the cloth at various locations and move it at various orientations until the cloth reached a flattened percentage greater than 95%.

Experimental Results
Test to Fully Flatten the Cloth: The figure below shows the percentage of flatness over all iterations of the algorithm.

Experimental Results
Experiment using PUMA 500: The goal of this experiment is to test the performance of our algorithm in a real-world environment using a PUMA 500 manipulator. We used a Logitech QuickCam 4000 for an overhead view to capture the configuration of the cloth.

Experimental Results
Experiment using PUMA 500: The four steps illustrated below show how the robot interacted with the cloth in each iteration.

Conclusion  We have proposed an approach to interactive perception in which a piece of laundry is flattened out into a canonical position by pulling at various locations of the cloth.  The algorithm is shown to provide an initial step in the process of unfolding / flattening a piece of laundry by using features of the cloth.

Questions?