VisHap: Augmented Reality Combining Haptics and Vision
Guangqi Ye, Jason J. Corso, Gregory D. Hager, Allison M. Okamura
Presented by: Adelle C. Knight

Agenda
Introduction
Design & Implementation
–Vision Subsystem
–World Subsystem
–Haptic Subsystem
Experimental Results
Future Work
Conclusions

Design & Implementation

–Pentium III PC running Linux
–SRI Small Vision System (SVS) with STH-MDCS stereo head
–PHANToM Premium 1.0A haptic device (SensAble Technologies)

Vision Subsystem

Purpose:
–Track the user's finger
–Provide 3D information & video to the world subsystem
Approach: appearance-based hand segmentation, then fingertip detection and tracking
VisHap Implementation:
–Assume the user interacts using a single finger
–Perform finger tracking on the left color image
–Compute the 3D position of the finger in the coordinate system of the left camera

Vision Subsystem: Appearance-based Hand Segmentation
Basic idea: split the image into small tiles & build a hue histogram for each tile
–Initial images: an on-line learning procedure builds the background model
–Subsequent images: build histograms and carry out pair-wise histogram comparison against the background model (see the sketch below)
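A minimal sketch of the per-tile comparison in Python with NumPy and OpenCV. The 5 x 5 tile size and 8 hue bins come from the experimental-results slide; the intersection threshold THRESH is an assumed value, and the background model here is built from a single frame rather than the on-line procedure:

import cv2
import numpy as np

TILE = 5      # patch size from the slides
BINS = 8      # hue histogram bins from the slides
THRESH = 0.6  # minimum histogram intersection to count as background (assumed)

def tile_hue_histograms(bgr):
    """Normalized hue histogram for every TILE x TILE patch of a BGR image."""
    hue = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[:, :, 0]
    hists = {}
    for y in range(0, hue.shape[0] - TILE + 1, TILE):
        for x in range(0, hue.shape[1] - TILE + 1, TILE):
            patch = hue[y:y + TILE, x:x + TILE]
            hist, _ = np.histogram(patch, bins=BINS, range=(0, 180))
            hists[(y, x)] = hist / patch.size
    return hists

def foreground_mask(frame, background_hists):
    """Mark a tile as foreground when its histogram diverges from the model."""
    mask = np.zeros(frame.shape[:2], dtype=np.uint8)
    for (y, x), hist in tile_hue_histograms(frame).items():
        # Histogram intersection: 1.0 = identical, 0.0 = disjoint.
        if np.minimum(hist, background_hists[(y, x)]).sum() < THRESH:
            mask[y:y + TILE, x:x + TILE] = 255
    return mask

# Usage: background_hists = tile_hue_histograms(background_frame)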

Vision Subsystem: Appearance-based Hand Segmentation
Colour appearance model of human skin:
–Collect training data
–Convert pixels from RGB to HSV colour space
–Learn a single Gaussian model of the hue distribution
–Check foreground pixels against the model to filter out non-skin points
–Remove noise with a median filter operation
[Figure: background image, foreground image, segmentation result]
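A minimal sketch of the single-Gaussian hue test, again in Python/NumPy; the 2.5-sigma acceptance band is an assumed tuning value, not taken from the paper:

import numpy as np

def fit_hue_model(skin_hues):
    """Learn mean and spread of the hue distribution from training skin pixels."""
    return float(np.mean(skin_hues)), float(np.std(skin_hues))

def skin_check(hues, mean, std, n_sigma=2.5):
    """Keep foreground pixels whose hue falls within n_sigma of the skin mean."""
    return np.abs(hues - mean) < n_sigma * std

The final denoising step maps directly onto a standard median filter such as cv2.medianBlur.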

Vision Subsystem: Fingertip Detection & Tracking
Detect the finger by exploiting its geometrical properties:
–Use a cylinder with a hemispherical cap to approximate the shape of the finger
–The radius of the sphere corresponding to the fingertip in the image (r) is approximately proportional to the reciprocal of the fingertip's depth with respect to the camera (z): r = K/z
–A series of criteria is checked on candidate fingertips to filter out false fingertips
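One such criterion can be read directly off r = K/z: the detected image radius must agree with the stereo depth. A quick sketch of that check; the constant K and the tolerance are illustrative values, not from the paper:

K = 900.0    # calibrated proportionality constant (assumed value)
TOL = 0.25   # allowed relative deviation (assumed value)

def plausible_fingertip(radius_px, depth):
    """Reject candidates whose image radius disagrees with r = K/z."""
    expected = K / depth
    return abs(radius_px - expected) / expected < TOL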

Vision Subsystem: Fingertip Detection & Tracking
–The algorithm outputs multiple candidates around the true location
–Select the candidate with the highest score as the fingertip
–Use a Kalman filter to predict the position of the fingertip (real-time tracking)
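A minimal constant-velocity Kalman filter over the 2D fingertip position, in Python/NumPy. The slide only says a Kalman filter is used; the state layout, frame rate, and noise magnitudes below are assumptions:

import numpy as np

dt = 1 / 30.0                          # frame period (assumed 30 fps)
F = np.array([[1, 0, dt, 0],           # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],            # only position is measured
              [0, 1, 0, 0]])
Q = 1e-2 * np.eye(4)                   # process noise (assumed)
R = 4.0 * np.eye(2)                    # measurement noise (assumed)

x = np.zeros(4)                        # state estimate
P = np.eye(4)                          # estimate covariance

def kf_step(z):
    """Predict the fingertip, then correct with the detected position z."""
    global x, P
    x = F @ x                          # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                       # filtered position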

World Subsystem

Purpose:
–Perform 3D vision/haptics registration
–Scene rendering
–Notify the haptic device about imminent interaction
System calibration (SVS and PHANToM 1.0A):
–Move the haptic device around in the field of view of the camera
–Record more than 3 pairs of corresponding coordinates in the camera and haptic frames
–Calculate the optimal absolute orientation solution (a sketch follows)
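The slides do not say which absolute orientation algorithm is used; a common choice is the SVD-based (Kabsch/Horn-style) closed-form solution, sketched here in Python/NumPy under that assumption:

import numpy as np

def absolute_orientation(cam_pts, hap_pts):
    """Rigid transform (R, t) mapping camera-frame points to the haptic frame.

    cam_pts, hap_pts: (N, 3) arrays of corresponding points, N >= 3.
    """
    cam_c = cam_pts.mean(axis=0)
    hap_c = hap_pts.mean(axis=0)
    H = (cam_pts - cam_c).T @ (hap_pts - hap_c)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = hap_c - R @ cam_c
    return R, t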

World Subsystem
Implement interaction properties:
–Database of various interaction modes and object surface properties
Examples:
–Interaction with a virtual wall → interaction property: slide along the surface
–Interaction with a button → interaction property: click
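Such a database can be as simple as a lookup table. A toy sketch; the field names and stiffness numbers are illustrative, not from the paper:

INTERACTION_DB = {
    "wall":   {"mode": "slide along", "stiffness": 800.0},  # N/m (illustrative)
    "button": {"mode": "click",       "stiffness": 300.0},  # N/m (illustrative)
}

def properties_for(obj_name):
    """Look up the interaction mode and surface properties of a scene object."""
    return INTERACTION_DB[obj_name]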

Haptic Subsystem

Purpose:
–Simulate the touching experience by presenting suitable force feedback to the fingertip
Control scheme:
–Control law
–Gravity compensation for the PHANToM
–Interaction with objects

Haptic Subsystem: Control Law for the Haptic Device
–A control law based on the error space guides the haptic device to the target position
–A low-pass filter achieves smooth control and removes high-frequency noise; it can be written in the frequency domain or, equivalently, in the time domain
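The slide gives the structure but not the exact law; a minimal sketch, assuming a PD law on the position error followed by a first-order discrete low-pass filter (the time-domain form), with illustrative gains:

import numpy as np

kp, kd = 200.0, 10.0        # proportional / derivative gains (assumed)
alpha = 0.2                 # low-pass weight: smaller = smoother (assumed)

prev_err = np.zeros(3)
filt_force = np.zeros(3)

def control_step(target, position, dt=0.001):
    """One servo tick (assumed 1 kHz): PD law on the error, then low-pass."""
    global prev_err, filt_force
    err = target - position
    raw = kp * err + kd * (err - prev_err) / dt        # control law
    prev_err = err
    # y[k] = (1 - alpha) * y[k-1] + alpha * u[k]  (discrete low-pass filter)
    filt_force = (1 - alpha) * filt_force + alpha * raw
    return filt_force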

Haptic Subsystem: Gravity Compensation for the PHANToM
–Compute the motor torques needed to counteract the gravity wrench applied to the manipulator
–The total torque is caused by the gravity of all parts of the device
–Result: smooth and stable trajectory tracking
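A generic sketch of the idea, assuming each link's weight is mapped to joint torques through that link's center-of-mass Jacobian; the masses and the com_jacobian function are placeholders, not the PHANToM's actual kinematic model:

import numpy as np

g = np.array([0.0, 0.0, -9.81])    # gravity vector (m/s^2)
link_masses = [0.1, 0.08, 0.05]    # kg, illustrative values

def gravity_torque(joint_angles, com_jacobian):
    """Joint torques cancelling the weight of every link.

    com_jacobian(i, q) -> (3, n) Jacobian of link i's center of mass
    (a placeholder for the device's kinematic model).
    """
    tau = np.zeros(len(joint_angles))
    for i, m in enumerate(link_masses):
        J = com_jacobian(i, joint_angles)
        tau -= J.T @ (m * g)       # counteract gravity, hence the minus sign
    return tau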

Haptic Subsystem: Interaction with Objects
–Simulate interaction forces by adjusting the force gain according to object properties and the interaction mode
–A transform converts the object's gain matrix to that of the haptic device
VisHap Implementation:
–Define the object-frame gain matrix Λgain as a diagonal matrix with λx, λy, λz as its diagonal elements
–The z-axis of the object's frame is along the normal of the object's surface
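The slide names the transform without writing it out; assuming it is the similarity R Λ Rᵀ with R the object-to-device rotation, a minimal sketch:

import numpy as np

def device_gain(lam_x, lam_y, lam_z, R_obj_to_dev):
    """Express diag(λx, λy, λz), defined in the object frame, in the device frame."""
    Lam_obj = np.diag([lam_x, lam_y, lam_z])
    return R_obj_to_dev @ Lam_obj @ R_obj_to_dev.T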

Haptic Subsystem: Interaction with Objects
Example: button (or keyboard key)
–Destination: center of the button's surface at initial contact
–Entering the object: as the user pushes the button down, increase λz to a proper value to simulate the button's resistance
–At the bottom: adjust the destination point of the haptic device to the surface of the button's bottom board & increase λz to a larger value
[Figure: relationship of the force gain λz and the depth d of the fingertip under the surface of the button]
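The actual λz(d) curve is only given as a figure; a piecewise sketch that matches the qualitative behaviour described above, with assumed travel and gain values:

TRAVEL = 0.004    # button travel before the bottom board is reached (m, assumed)

def lambda_z(depth):
    """Force gain as a function of fingertip depth d under the button surface."""
    if depth <= 0.0:
        return 0.0        # no contact yet
    if depth < TRAVEL:
        return 300.0      # resistance while the key travels (illustrative)
    return 2000.0         # much stiffer once the bottom board is reached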

Experimental Results
Foreground segmentation:
–Used the first 10 frames to learn the appearance model of the background
–Hue histograms of 8 bins for each 5 x 5 image patch
–Test procedure: record image pairs of background and foreground
–Evaluation: compare the segmentation result against a ground-truth classification image
Tested on 26 pairs of images:
–Average correct ratio: 98.16%
–Average false-positive ratio: 1.55%
–False-negative ratio: 0.29%
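These three ratios follow directly from pixel counts; a minimal sketch assuming binary masks where nonzero means foreground (the authors' exact counting conventions are not specified):

import numpy as np

def segmentation_scores(result, truth):
    """Correct / false-positive / false-negative ratios over all pixels."""
    res = result.astype(bool)
    gt = truth.astype(bool)
    n = res.size
    correct = np.sum(res == gt) / n
    false_pos = np.sum(res & ~gt) / n    # marked foreground, actually background
    false_neg = np.sum(~res & gt) / n    # marked background, actually foreground
    return correct, false_pos, false_neg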

Experimental Results
Virtual environment: a virtual plane in space
Interactions:
–User moves a finger to interact with the plane
–User moves a finger to press a fixed button
–VisHap automatically switches interaction objects according to the scene configuration and the current fingertip position
–Haptic force feedback is applied along the normal of the object's surface, based on the distance of the fingertip to the object
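One plausible reading of the switching rule, sketched under the assumption that the nearest object surface within a capture distance becomes active; the threshold and the plane-distance test are illustrative:

import numpy as np

CAPTURE_DIST = 0.03    # m, assumed switching radius

def active_object(fingertip, objects):
    """Pick the scene object closest to the fingertip, if close enough.

    objects: list of (name, surface_point, unit_surface_normal) triples.
    """
    best, best_d = None, CAPTURE_DIST
    for name, point, normal in objects:
        d = abs(np.dot(fingertip - point, normal))   # distance to surface plane
        if d < best_d:
            best, best_d = name, d
    return best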

Future Work
–Head-mounted display (HMD) to achieve higher immersiveness and fidelity
–Extend the virtual environment to incorporate richer sets of objects and interaction modes

Conclusions
–VisHap generates a "complete" haptic experience
–Modular framework combining computer vision, a haptic device, and an augmented environment model
–Experimental results justify the design and demonstrate the flexibility and extensibility of the framework