Gesture Input and Gesture Recognition Algorithms


A few examples of gestural interfaces and gesture sets (Think about what algorithms would be necessary to recognize the gestures in these examples.)

a) rectangle b) ellipse c) line d) group e) copy f) rotation g) delete (“x”) Rubine, “Specifying gestures by example”, SIGGRAPH 1991, https://scholar.google.ca/scholar?q=rubine+specifying+gestures+example

Wobbrock, Wilson, and Li. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. UIST 2007. https://scholar.google.ca/scholar?q=wobbrock+wilson+gestures+without+libraries+toolkits+training+recognizer

Graffiti. Pilot graffiti reference card: http://www.palminfocenter.com/news/8493/pilot-1000-retrospective/ Tall image showing multiple ways to enter certain characters: http://www.computerhope.com/jargon/g/graffiti.htm

EdgeWrite (http://depts.washington.edu/ewrite/): a mechanical way to simplify gesture recognition, using physical constraints

EdgeWrite How can we algorithmically distinguish these gestures? Answer: find the order in which “corners” (triangular subregions) are visited, and look up the sequence in a dictionary
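A minimal sketch of that lookup, assuming each sample point of the stroke has already been labeled with the corner region it falls in. The corner numbering and the tiny gesture dictionary here are illustrative stand-ins, not EdgeWrite's actual alphabet:

```python
# Hypothetical corner labels: 0 = top-left, 1 = top-right,
# 2 = bottom-left, 3 = bottom-right. The dictionary below is
# illustrative only, not EdgeWrite's real gesture set.
GESTURES = {
    (0, 1, 3): "a",
    (0, 2, 3, 1): "b",
}

def corner_sequence(corner_labels):
    """Collapse per-point corner labels into the sequence of distinct
    corners visited, dropping consecutive repeats."""
    seq = []
    for c in corner_labels:
        if not seq or seq[-1] != c:
            seq.append(c)
    return tuple(seq)

def recognize(corner_labels):
    # Returns None when the sequence isn't in the dictionary.
    return GESTURES.get(corner_sequence(corner_labels))

print(recognize([0, 0, 0, 1, 1, 3, 3]))  # -> "a"
```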

Tivoli Moran, Chiu, van Melle, and Kurtenbach. Implicit Structure for Pen-based Systems within a Freeform Interaction Paradigm, CHI 1995 https://scholar.google.ca/scholar?q=moran+chiu+kurtenbach+implicit+structure+pen-based+systems+freeform+interaction

How does Tivoli detect rows and columns within sets of ink strokes? Answer on next slide…

Tivoli Moran, Chiu, van Melle, and Kurtenbach. Implicit Structure for Pen-based Systems within a Freeform Interaction Paradigm, CHI 1995 https://scholar.google.ca/scholar?q=moran+chiu+kurtenbach+implicit+structure+pen-based+systems+freeform+interaction

Hierarchical Radial Menu. From Gord Kurtenbach’s PhD thesis.

Combination rectangle + lasso selection Remember! Question: how can an algorithm distinguish between the gesture on the left and the one on the right? Answer: check if (length_of_ink_trail) ÷ (straight_line_distance_from_start_to_end_of_drag) > 2.5
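A direct translation of that test, assuming points is the list of (x, y) samples captured during the drag (the 2.5 threshold comes from the slide above):

```python
import math

def is_lasso(points, threshold=2.5):
    """Return True if the drag looks like a lasso: the ink trail is much
    longer than the straight-line distance from start to end."""
    trail = sum(math.dist(points[i], points[i + 1])
                for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    if direct == 0:
        return True  # start == end: a closed loop is clearly a lasso
    return trail / direct > threshold
```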

Gesture recognition algorithms

How would we algorithmically distinguish Marking Menu strokes? (class discussion) From Gord Kurtenbach’s PhD thesis.
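One plausible answer (a sketch, not necessarily what Kurtenbach's implementation does): since marking menu strokes are essentially straight flicks, take the overall direction from the first point to the last and quantize it into one of the menu's sectors:

```python
import math

def marking_menu_item(points, num_items=8):
    """Map a stroke to a menu item by quantizing its overall direction
    into one of num_items equal angular sectors (0 = east)."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    angle = math.atan2(y1 - y0, x1 - x0)       # in [-pi, pi]
    sector = 2 * math.pi / num_items
    # Shift by half a sector so each item is centered on its direction.
    return int(((angle + sector / 2) % (2 * math.pi)) // sector)
```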

How do we find a “corner” in an ink stroke? What about when the stroke is noisy? What about when there is sporadic noise? (see written notes about filtering out noise)
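A rough sketch of one common answer (the window size and angle threshold below are illustrative choices, not values from the notes): measure the change in direction over a window of several samples, rather than between adjacent samples, so that sporadic high-frequency noise doesn't produce false corners:

```python
import math

def find_corners(points, window=5, angle_threshold=math.radians(45)):
    """Return indices where the stroke turns sharply: the direction of
    travel measured `window` samples before and after point i differs
    by more than angle_threshold."""
    corners = []
    for i in range(window, len(points) - window):
        (ax, ay), (bx, by), (cx, cy) = (points[i - window], points[i],
                                        points[i + window])
        turn = math.atan2(cy - by, cx - bx) - math.atan2(by - ay, bx - ax)
        turn = abs((turn + math.pi) % (2 * math.pi) - math.pi)  # wrap to [0, pi]
        if turn > angle_threshold:
            corners.append(i)
    return corners  # in practice, merge adjacent indices into one corner
```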

Dabbleboard https://www.youtube.com/watch?v=5kZDqiH_nGM Click + typing: entering text Click + click + drag: rectangle selection Widgets for moving, duplicating, deleting, resizing

Web browser http://dolphin.com/

Samsung Galaxy Note

How can we allow a user (or designer) to define new gestures without writing code? Specify new gestures with examples! This requires performing some kind of “pattern matching” between the pre-supplied example gestures and each gesture entered during interaction.

Rubine, “Specifying gestures by example”, SIGGRAPH 1991, https://scholar.google.ca/scholar?q=rubine+specifying+gestures+example

Rubine, “Specifying gestures by example”, SIGGRAPH 1991, https://scholar.google.ca/scholar?q=rubine+specifying+gestures+example

Gesture recognition with Rubine’s algorithm (1991) Remember! Each gesture entered (and each example gesture) is reduced to a feature vector, and so corresponds to a point in some multidimensional space. We need some way to classify these points among the classes of gestures. Rubine measured a successful recognition rate above 95%.
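To make “feature vector” concrete, here is a hypothetical reduction of a stroke to a handful of scalar features. Rubine's paper defines 13 specific features; the ones below are simplified stand-ins:

```python
import math

def feature_vector(points):
    """Reduce a stroke (a list of (x, y) points) to a few illustrative
    features; Rubine's actual 13 features are defined in his paper."""
    xs = [x for x, y in points]
    ys = [y for x, y in points]
    path_length = sum(math.dist(points[i], points[i + 1])
                      for i in range(len(points) - 1))
    bbox_diagonal = math.dist((min(xs), min(ys)), (max(xs), max(ys)))
    start_to_end = math.dist(points[0], points[-1])
    initial_angle = math.atan2(points[1][1] - points[0][1],
                               points[1][0] - points[0][0])
    return [path_length, bbox_diagonal, start_to_end, initial_angle]
```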

Rubine (1991) https://scholar.google.ca/scholar?q=rubine+specifying+gestures+example

Each gesture corresponds to a vector (or a multidimensional point). Here, the green points are examples of gestures of one class, and the red points are another class. How do we classify the gesture $\vec{g}$ whose position is marked with an “X” below? $\vec{v}_{c,e} = (v_{c,e,1}, \ldots, v_{c,e,F})$: feature vector of an example. $\vec{g} = (g_1, \ldots, g_F)$: gesture to classify.

1st solution: compare the distances between the new gesture and each example (“nearest neighbor” search).
- How do we calculate this distance?
- How much time will this take? (Assume $F$ features, i.e., an $F$-dimensional space, $C$ classes (or kinds of gestures), and $E$ examples per class.)
Remember! $\vec{v}_{c,e} = (v_{c,e,1}, \ldots, v_{c,e,F})$, $\vec{g} = (g_1, \ldots, g_F)$

Distance between the gesture to classify and an example: Remember! $\|\vec{g} - \vec{v}_{c,e}\|^2 = \sum_{f=1}^{F} (g_f - v_{c,e,f})^2 = (g_1 - v_{c,e,1})^2 + \ldots + (g_F - v_{c,e,F})^2$
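A minimal nearest-neighbor classifier over such feature vectors; `examples` is assumed to map each class name to its list of example vectors:

```python
def classify_nearest_neighbor(g, examples):
    """Return the class of the example closest to feature vector g.
    examples: dict mapping class name -> list of F-dimensional vectors.
    Cost: O(C * E * F), as in the comparison table later in the slides."""
    best_class, best_dist = None, float("inf")
    for cls, vectors in examples.items():
        for v in vectors:
            d = sum((gf - vf) ** 2 for gf, vf in zip(g, v))  # squared distance
            if d < best_dist:
                best_class, best_dist = cls, d
    return best_class
```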

2nd solution: pre-calculate the centroid of each class of examples (“k-means”).
- How do we pre-calculate these centroids?
- How much time will this take?
- How do we then classify a new gesture?
- How much time will this take?
$\bar{x}_c = (\bar{x}_{c,1}, \ldots, \bar{x}_{c,F})$: centroid. Remember! $\vec{v}_{c,e} = (v_{c,e,1}, \ldots, v_{c,e,F})$, $\vec{g} = (g_1, \ldots, g_F)$

Calculating a centroid: $\bar{x}_c = \frac{1}{E} \sum_{e=1}^{E} \vec{v}_{c,e}$ Remember! Distance between the gesture to classify and a centroid: $\|\vec{g} - \bar{x}_c\|^2 = \sum_{f=1}^{F} (g_f - \bar{x}_{c,f})^2$
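A sketch of the centroid approach, using the same `examples` structure as before; pre-processing is O(C·E·F) and classification drops to O(C·F):

```python
def compute_centroids(examples):
    """Pre-compute one centroid per class: the mean of its example vectors."""
    centroids = {}
    for cls, vectors in examples.items():
        n, F = len(vectors), len(vectors[0])
        centroids[cls] = [sum(v[f] for v in vectors) / n for f in range(F)]
    return centroids

def classify_by_centroid(g, centroids):
    """Classify g as the class whose centroid is nearest."""
    return min(centroids,
               key=lambda cls: sum((gf - cf) ** 2
                                   for gf, cf in zip(g, centroids[cls])))
```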

3rd solution (proposed by Rubine): pre-calculate hyperplanes to separate the examples (Support Vector Machine, or SVM). See his paper for details. Below is an example of a case where SVM hyperplanes do a better job than centroids: k-means centroids would classify the point at “X” as red, but SVM would classify it as green. (In practice, such cases may be rare, and the extra complexity of programming SVM might not be worth the bother.) Remember! Solid line: hyperplane that separates the red class from the other examples. Dashed line: perpendicular bisector between the centroids.
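If you do want a linear classifier of this kind without implementing the training yourself, a library can do the work. Below is a minimal sketch using scikit-learn's LinearSVC, a modern stand-in rather than Rubine's own training procedure; the feature vectors and labels are toy data:

```python
# Toy sketch: train a linear SVM on gesture feature vectors.
# scikit-learn is used here as a modern substitute; this is not
# Rubine's 1991 training algorithm.
from sklearn.svm import LinearSVC

X = [[0.0, 1.2], [0.1, 1.0], [2.0, 0.1], [2.1, 0.3]]  # toy feature vectors
y = ["circle", "circle", "line", "line"]              # class labels

clf = LinearSVC()   # learns separating hyperplanes
clf.fit(X, y)
print(clf.predict([[1.9, 0.2]]))  # -> ['line']
```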

We have F features (i.e., an F-dimensional space), C classes (or kinds of gestures), and E examples per class. Remember!

1. Nearest neighbor
   Time for pre-processing: n/a
   Time to classify a gesture: O(C·E·F)
   Reliable? Always.

2. k-means centroids
   Time for pre-processing: O(C·E·F) to compute the centroids
   Time to classify a gesture: O(C·F)
   Reliable? If the examples are linearly separable AND each class has approximately the same variance.

3. Rubine’s SVM hyperplanes
   Time for pre-processing: depends on the implementation; one iterative algorithm takes O((number of iterations)·C²·E·F) to find good hyperplanes
   Reliable? If the examples are linearly separable.

Notes: approach 3 is the most complicated to program, while being slower than approach 2 and less reliable than approach 1. So, I recommend trying approaches 1 or 2 before trying approach 3.

Gesture recognition with the “$1” algorithm (Wobbrock et al., 2007) Remember!

$1 doesn’t use feature vectors; instead, it compares the geometry of a gesture with the geometry of each example, computing the point-by-point difference. This is easiest to do if all gestures have the same number of points.

In Wobbrock et al.’s 2007 article, the $1 approach is presented as one that only uses simple math operations, is easy to implement without libraries, and is fast; however, this is in comparison to Rubine’s SVM hyperplane approach. If we simplify Rubine’s approach to classify feature vectors with nearest neighbor or k-means (as shown previously in the slides), then the feature vector approach becomes just as easy to implement and possibly faster than $1. $1’s successful recognition rate is superior to Rubine’s, as measured by Wobbrock et al.

$1 involves a key step: resampling the gesture, so that the gesture and the examples all have the same number of points (a sketch of this step follows below).

Time to classify a gesture: O(C·E·N), where C is the number of classes, E is the number of examples per class, and N is the number of points per example (see written notes for more details).
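The resampling step, following the RESAMPLE pseudocode in Wobbrock et al.'s paper: walk along the stroke and emit N points spaced evenly by arc length:

```python
import math

def resample(points, n=64):
    """Resample a stroke to n points equally spaced along its arc length
    (the resampling step of the $1 recognizer)."""
    pts = [tuple(p) for p in points]
    interval = sum(math.dist(pts[i - 1], pts[i])
                   for i in range(1, len(pts))) / (n - 1)
    new_points, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            # Interpolate a new point q at exactly `interval` arc length.
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            new_points.append(q)
            pts.insert(i, q)  # q starts the next segment on the next pass
            acc = 0.0
        else:
            acc += d
        i += 1
    if len(new_points) < n:   # floating-point rounding can drop the last point
        new_points.append(pts[-1])
    return new_points
```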

Wobbrock, Wilson, and Li. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. UIST 2007 https://scholar.google.ca/scholar?q=wobbrock+wilson+gestures+without+libraries+toolkits+training+recognizer

Wobbrock et al. (2007): Wobbrock, J. O., Wilson, A. D., and Li, Y. 2007. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA, October 07-10, 2007). UIST '07. ACM, New York, NY, 159-168. DOI: http://doi.acm.org/10.1145/1294211.1294238