Data Driven Attributes for Action Recognition, Week 7. Presented by Christina Peterson.

Presentation transcript:

Data Driven Attributes for Action Recognition
Week 7, presented by Christina Peterson

Validation/Testing Sets
Validation set: videos 1-4
Test set: video 5
Exemplar set: videos 6-10
Issue on the validation set: the exemplar-SVMs were predicting a negative label for many of the positive samples.
Solution: use the regularization parameters recommended in [1], C1 = 0.5 and C2 = 0.01.
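
A minimal sketch of how those class-weighted parameters could be used when training one exemplar-SVM per positive video, assuming scikit-learn and precomputed feature vectors (the function name and the exemplar_feature/negative_features inputs are hypothetical, and class_weight only approximates the separate C1/C2 hinge weights of [1]):

    import numpy as np
    from sklearn.svm import LinearSVC

    def train_exemplar_svm(exemplar_feature, negative_features, c1=0.5, c2=0.01):
        """Train one linear SVM for a single positive exemplar against many negatives.

        c1 weights the (single) positive sample and c2 the negatives, mirroring
        the C1/C2 regularization parameters recommended in [1].
        """
        X = np.vstack([exemplar_feature[None, :], negative_features])
        y = np.concatenate([[1], np.zeros(len(negative_features), dtype=int)])
        # class_weight rescales the per-class C, so the effective costs are
        # roughly C*c1 for the exemplar and C*c2 for each negative sample.
        clf = LinearSVC(C=1.0, class_weight={1: c1, 0: c2}, loss="hinge",
                        max_iter=10000)
        clf.fit(X, y)
        return clf

The small weight on the negatives is meant to keep the very large negative set from dominating the single positive exemplar.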

Precision and Recall
tp: true positive
fp: false positive
fn: false negative
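
For reference, the standard formulas behind these counts, as a minimal Python helper (a sketch; the counts would come from comparing each exemplar-SVM's predicted labels against the validation labels):

    def precision_recall(tp, fp, fn):
        """Standard precision/recall from raw counts.

        precision = tp / (tp + fp): fraction of predicted positives that are correct
        recall    = tp / (tp + fn): fraction of actual positives that are recovered
        """
        precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
        recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
        return precision, recall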

Action 1: Diving-Side

Action 2: Golf-Swing

Action 3: Kicking

Action 5: Riding-Horse

Action 6: Run-Side

Action 7: Skateboarding-Front

Action 9: Swing-SideAngle

Platt's Method Using Newton's Method with Backtracking
Lin et al. [2] propose an improvement to Platt's method [3] that uses Newton's method with a backtracking line search to solve for the sigmoid parameters alpha and beta.
They demonstrate numerical difficulties with Platt's original implementation when alpha and beta are large, or when p_i is near zero or one.
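
A compact Python sketch of this fitting procedure, following the pseudocode of Lin et al. [2] (the function name is mine; scores are SVM decision values, labels are +1/-1 or 1/0):

    import numpy as np

    def fit_platt_sigmoid(scores, labels, max_iter=100, min_step=1e-10,
                          sigma=1e-12, eps=1e-5):
        """Fit P(y=1 | f) = 1 / (1 + exp(A*f + B)) to SVM decision values.

        Newton's method on the regularized negative log-likelihood, with a
        backtracking line search, as proposed by Lin et al. [2].
        """
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels)
        n_pos = int(np.sum(labels > 0))
        n_neg = len(labels) - n_pos
        # Regularized targets keep the fitted probabilities away from 0 and 1.
        hi, lo = (n_pos + 1.0) / (n_pos + 2.0), 1.0 / (n_neg + 2.0)
        t = np.where(labels > 0, hi, lo)

        def objective(A, B):
            fApB = scores * A + B
            # Numerically stable negative log-likelihood.
            return np.sum((t - 1.0) * fApB + np.logaddexp(0.0, fApB))

        A, B = 0.0, np.log((n_neg + 1.0) / (n_pos + 1.0))
        fval = objective(A, B)
        for _ in range(max_iter):
            fApB = scores * A + B
            z = np.exp(-np.abs(fApB))                      # stable sigmoid
            p = np.where(fApB >= 0, z / (1.0 + z), 1.0 / (1.0 + z))
            d1 = t - p
            g1, g2 = np.sum(scores * d1), np.sum(d1)       # gradient
            if abs(g1) < eps and abs(g2) < eps:
                break
            d2 = p * (1.0 - p)
            h11 = sigma + np.sum(scores * scores * d2)     # regularized 2x2 Hessian
            h22 = sigma + np.sum(d2)
            h21 = np.sum(scores * d2)
            # Newton direction: solve H [dA, dB]^T = -[g1, g2]^T.
            det = h11 * h22 - h21 * h21
            dA = -(h22 * g1 - h21 * g2) / det
            dB = -(-h21 * g1 + h11 * g2) / det
            gd = g1 * dA + g2 * dB
            step = 1.0
            while step >= min_step:                        # backtracking line search
                newA, newB = A + step * dA, B + step * dB
                newf = objective(newA, newB)
                if newf < fval + 1e-4 * step * gd:         # sufficient decrease
                    A, B, fval = newA, newB, newf
                    break
                step /= 2.0
            else:
                break                                      # line search failed
        return A, B

A new decision value f is then mapped to a probability with 1 / (1 + exp(A*f + B)); the regularized targets hi and lo are what keep the fitted p_i away from exactly zero or one.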

References
[1] T. Malisiewicz, A. Gupta, and A. A. Efros. Ensemble of exemplar-SVMs for object detection and beyond. In ICCV, 2011.
[2] Hsuan-Tien Lin, Chih-Jen Lin, and Ruby C. Weng. A note on Platt's probabilistic outputs for support vector machines. Machine Learning, 68(3):267-276, October 2007.
[3] John C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61-74. MIT Press, 1999.