Presentation transcript:

Toward Learning to Detect and Use Containers
Shane Griffith, Jivko Sinapov, and Alexander Stoytchev, Developmental Robotics Lab, Iowa State University

1. Motivation
Kemp et al. (2007) propose a grand challenge for robots to clean and organize a house, a task that relies heavily on a robot's ability to understand the affordances of containers. Containers are fundamental to almost every aspect of daily life, yet little previous work addresses the problem of learning the properties of containers. Programming a robot with the ability to discover congruency between two objects and how the objects bind to each other is a challenging task; containers offer a generalizable paradigm and a good starting point toward achieving this goal.

2. Related Work
Some work has focused on developing accurate 3D models of objects (e.g., superquadrics) without considering their affordances. Saxena et al. (2008) applied machine learning techniques to find grasp locations on objects without 3D models. Grasp points are frequently found on the rim of containers, so the features used to find grasp points may supplement other sensory data for accurate container classification. Sinapov and Stoytchev (2008) introduced a framework in which a robot learns tool affordances from interactions between a tool and a puck. This framework leads to an accurate and compact model of the possible outcomes with each tool, which can be used to find similarities with other tools; a strict separation between containers and non-containers may be possible with this framework.

3. Infants' Understanding of Containment
The early formation of these conceptual object-kinds in infants is theorized to emerge from the mechanical properties of objects and events (Leslie, 1994). Characterization of container affordances is a fundamental ability that infants learn through interaction with and observation of objects in their environment (Hespos and Baillargeon, 2005). Casasola et al. (2003) show that infants as young as six months conceptualize containment; their results support the argument that infants acquire containment as one of their earliest spatial concepts.

4. Experimental Setup
- 7-DOF Barrett WAM arm with the Barrett hand.
- Sony EVI-D30 camera beside the WAM, facing the table.
- Three objects, Q, to test for containment and two blocks, P (square block and cross block), to grab and drop.
- Sequence of behaviors with four states: 1. grab block p; 2. drop block p near object q; 3. push object q; 4. idle.
- For each object q in Q, 100 experiments K are recorded, 50 for each block p in P, each with its drop point d_k.
- Robot position, time (in nanoseconds), and experiment state are recorded at 500 Hz.
- Objects are tracked by color during the push state.

5. Learning and Prediction
For each image i, object q has position (X_q, Y_q) and block p has position (X_p, Y_p). An outcome v_k in V is computed for each experiment k: v_k equals the vector difference between the object and the block across the first and last frames in the image sequence, or more formally:

v_k = |ΔX_q − ΔX_p| + |ΔY_q − ΔY_p|

A value of v_k near zero indicates that the object and block moved together during the push interaction. K-means with k = 2 is applied to V to separate containment points from non-containment points. Given the output o = {d_k, v_k} for each experiment performed with an object, apply 10-fold cross-validation and process the data with an RBF network. The robot learns a mapping m to predict the outcome v given a novel drop point d.

6. Clustering Results
K-means clustering applied to the output metric v shows the area of containment for each container. Histograms for each object show the results of the output metric v.

Fig. 1: Vector difference histograms for each object q in Q (black bucket, red bucket, purple bucket). A high peak near zero indicates containment; the other high peaks indicate that only the container moved during the push state.

Fig. 2: The block drop points associated with the clustered vectors. The green cluster indicates a containment relation; the red cluster indicates non-containment.

Noise in the clustering data has a few explanations:
1. The block fell to a position outside the container where the push behavior still made the block move with the object
2. The block hit the rim of the container
3. The container occluded the block

7. Preliminary Learning Results
Results for containment prediction with novel drop points among all three containers: prediction from only a novel drop point, with the robot trained on data clustered by the simple vector difference metric, is close to 80% accurate. Much of the error is introduced during the clustering step; many points are grouped with the wrong cluster due to the limited amount of information the k-means algorithm receives. Overall, the robot can learn and predict a containment relation significantly better than chance using a small repertoire of babbling interactions.

8. Future Work
Apply the prediction model to cluster containers from non-containers, adding to the current scheme that identifies the container drop area. Process the dataset for other features, including:
1. Distances between the object, the block, and the robot
2. Spatiotemporal information
3. Start times of movements
4. Shape and size of both objects

Modify the experimental setup in the following ways:
1. Record sound during the drop state
2. Record the amount of force required to push the object
3. Add to the dataset more containers and blocks with a wider range of shapes and sizes

For further investigation: identify strengths and weaknesses of the learning model with the "Mega Blocks" toy set, further improving the learning technique; give the robot the "Baby's First Blocks" toy once it performs well at containment prediction and container classification among dynamic objects.
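The outcome metric v_k and the k = 2 clustering described above can be sketched in a few lines. This is a minimal stdlib-only illustration, not the authors' code: the function names, the min/max centroid initialization, and the scalar (1-D) formulation of k-means are assumptions on my part.

```python
def outcome(first_q, last_q, first_p, last_p):
    """v_k = |dX_q - dX_p| + |dY_q - dY_p| over the first and last
    frames: a value near zero means the object and block moved
    together during the push interaction."""
    dxq, dyq = last_q[0] - first_q[0], last_q[1] - first_q[1]
    dxp, dyp = last_p[0] - first_p[0], last_p[1] - first_p[1]
    return abs(dxq - dxp) + abs(dyq - dyp)

def kmeans_1d(values, iters=20):
    """Lloyd's algorithm with k = 2 on the scalar outcomes V,
    separating containment (near-zero v) from non-containment
    (large v). Centroids start at the min and max of the data."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    # label 0 = cluster nearest the low centroid (containment)
    return c, [int(abs(v - c[0]) > abs(v - c[1])) for v in values]
```

For example, outcomes of roughly {0.1, 0.3, 0.2} from contained trials and {5.0, 6.0, 5.5} from non-contained trials separate cleanly into the two clusters.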
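The mapping m from a novel drop point d to a predicted containment outcome, and the 10-fold cross-validation protocol, can be sketched as follows. The poster trains an RBF network; to keep this sketch self-contained I substitute a 1-nearest-neighbor rule as a stand-in for the learned mapping, so the function names and the predictor itself are assumptions, not the authors' implementation.

```python
import math

def learn_mapping(drop_points, labels):
    """Return a mapping m that predicts the containment label
    (1 = containment, 0 = non-containment) for a novel drop point d.
    A 1-nearest-neighbor rule stands in for the RBF network here."""
    def m(d):
        dists = [math.dist(d, p) for p in drop_points]
        return labels[dists.index(min(dists))]
    return m

def cv_accuracy(drop_points, labels, folds=10):
    """Estimate the mapping's accuracy with k-fold cross-validation,
    mirroring the 10-fold protocol: each fold is held out once while
    the mapping is learned from the remaining experiments."""
    n = len(drop_points)
    correct = 0
    for f in range(folds):
        held_out = set(range(f, n, folds))
        train_p = [p for i, p in enumerate(drop_points) if i not in held_out]
        train_l = [l for i, l in enumerate(labels) if i not in held_out]
        m = learn_mapping(train_p, train_l)
        correct += sum(m(drop_points[i]) == labels[i] for i in held_out)
    return correct / n
```

On well-separated drop points (inside vs. far from the container rim) this predictor is nearly perfect; the roughly 80% accuracy reported above reflects the noisier cluster labels produced by k-means, not the predictor alone.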