Behavior Recognition Based on Machine Learning Algorithms for a Wireless Canine Machine Interface
Students: Avichay Ben Naim, Lucie Levy
14 May 2014
Ort Braude College – SE Dept.
Agenda
- Motivation
- Definitions
- Description of the research
- Useful algorithms
- The classification flow
- Conclusions
It's all about canine training
- Bloodhound
- Watchdog
- Guide dog
- Police dog
- Search and rescue dog
Definitions
- Inertial Measurement Unit (IMU)
- The gyroscope
- The accelerometer
- Moving average filter
- Decision tree
Inertial Measurement Units
An IMU is an electronic device that measures a body's specific force and angular rate, using a combination of accelerometers and gyroscopes.
The gyroscope
- Etymology
- Description and diagram:
  http://en.wikipedia.org/wiki/File:Gyroscope_wheel_animation.gif
  http://en.wikipedia.org/wiki/File:Gyroscope_operation.gif
- Modern uses
Moving average filter
A statistical tool that analyzes data points by creating a series of averages of successive subsets of the full data set. A moving average is commonly used with time-series data. (A minimal sketch follows.)
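Below is a minimal sketch of such a filter in Python with NumPy; the window length and the noisy example signal are hypothetical, not taken from the study.

```python
import numpy as np

def moving_average(signal, window=5):
    """Smooth a 1-D signal by averaging each run of `window` samples."""
    kernel = np.ones(window) / window
    # mode="valid" keeps only positions where the window fully overlaps the signal
    return np.convolve(signal, kernel, mode="valid")

# Example: smooth a noisy sine wave standing in for an accelerometer trace
raw = np.sin(np.linspace(0, 4 * np.pi, 100)) + 0.3 * np.random.randn(100)
smoothed = moving_average(raw, window=5)
```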
Decision tree
A decision support tool that maps observations to conclusions through a tree of attribute tests. Decision trees are commonly used in decision analysis.
Useful algorithms
- Forward algorithm
- Viterbi algorithm
- Baum-Welch algorithm
- C4.5 algorithm
- Hidden Markov Model (HMM)
Markov chain
- The state is directly visible to the observer.
- The state transition probabilities are the only parameters.
[Figure: a two-state chain with states H and C.]
Hidden Markov Model
- The state is not directly visible.
- The output, which depends on the state, is visible.
- Each state has a probability distribution over the possible output tokens.
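As a concrete reference, here is a minimal sketch of a discrete HMM in Python; the two states, two output tokens, and all probabilities are hypothetical, not the model trained in the study.

```python
import numpy as np

pi = np.array([0.6, 0.4])          # initial state probabilities P(s_1 = i)
A = np.array([[0.7, 0.3],          # A[i, j] = P(next state j | current state i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # B[i, k] = P(output token k | state i)
              [0.2, 0.8]])

def sample_hmm(T, seed=0):
    """Draw a (hidden states, visible outputs) pair of length T."""
    rng = np.random.default_rng(seed)
    states, outputs = [], []
    s = rng.choice(2, p=pi)
    for _ in range(T):
        states.append(int(s))
        outputs.append(int(rng.choice(2, p=B[s])))   # emit from the current state
        s = rng.choice(2, p=A[s])                    # then move to the next state
    return states, outputs
```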
Three problems in HMM and their solutions
Problem 1: Given an HMM, what is the probability of a given observation sequence? Solution 1: Forward algorithm.
Problem 2: Given an observation sequence and an HMM, what is the most likely sequence of hidden states that produced it? Solution 2: Viterbi algorithm.
Problem 3: Given an observation sequence, how do we adjust the model parameters to maximize the probability of that sequence? Solution 3: Baum-Welch algorithm.
Forward algorithm
- Solves the first HMM problem.
- Algorithm input: an HMM and an observation sequence.
- Algorithm output: the probability of that observation sequence.
- Equations:
  Initialization: $\alpha_1(i) = \pi_i\, b_i(o_1)$
  Induction: $\alpha_{t+1}(j) = b_j(o_{t+1}) \sum_i \alpha_t(i)\, a_{ij}$
  Termination: $P(O \mid \lambda) = \sum_i \alpha_T(i)$
Forward algorithm – Example
Calculate the probability of a given observation sequence (worked through on the slide).
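The recursion above translates almost line for line into code. A minimal sketch, reusing the hypothetical two-state model from the earlier HMM snippet; the observation sequence [0, 1, 0] is likewise made up.

```python
import numpy as np

pi = np.array([0.6, 0.4])                  # hypothetical initial probabilities
A = np.array([[0.7, 0.3], [0.4, 0.6]])     # hypothetical transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])     # hypothetical emission matrix

def forward(obs, pi, A, B):
    """P(obs | model) via the forward recursion."""
    alpha = pi * B[:, obs[0]]              # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = B[:, o] * (alpha @ A)      # alpha_{t+1}(j) = b_j(o) * sum_i alpha_t(i) a_ij
    return alpha.sum()                     # P(O | model) = sum_i alpha_T(i)

print(forward([0, 1, 0], pi, A, B))
```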
Viterbi algorithm
- A dynamic programming algorithm.
- Finds the most likely sequence of hidden states from the observed events.
- Equations:
  Initialization: $\delta_1(i) = \pi_i\, b_i(o_1)$
  Recursion: $\delta_{t+1}(j) = b_j(o_{t+1}) \max_i \delta_t(i)\, a_{ij}$, with backpointer $\psi_{t+1}(j) = \arg\max_i \delta_t(i)\, a_{ij}$
- Animation: http://upload.wikimedia.org/wikipedia/commons/7/73/Viterbi_animated_demo.gif
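A minimal sketch in Python; log probabilities avoid numerical underflow on long sequences. The example model and observation sequence are hypothetical.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state sequence for obs."""
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]        # delta_1(i)
    back = np.zeros((T, N), dtype=int)          # backpointers psi
    for t in range(1, T):
        scores = delta[:, None] + logA          # scores[i, j]: reach state j via i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]                # best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))     # follow backpointers
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 1, 1], pi, A, B))             # -> [0, 1, 1] for this toy model
```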
Baum-Welch algorithm
An expectation-maximization procedure that re-estimates the HMM transition and emission probabilities from observed sequences; it solves the third HMM problem.
Baum-Welch algorithm – Example
To start, we first guess the transition and emission matrices. The next step is to estimate a new transition matrix.
Baum-Welch algorithm – Example
The new estimate for the S1-to-S2 transition is now $\hat{a}_{12} = \sum_t \xi_t(1,2) \big/ \sum_t \gamma_t(1)$: the expected number of S1-to-S2 transitions divided by the expected number of visits to S1 (here $\xi_t(i,j)$ is the posterior probability of being in state i at time t and state j at time t+1, and $\gamma_t(i)$ the posterior probability of being in state i at time t). Then calculate the S2-to-S1, S2-to-S2 and S1-to-S1 probabilities in the same way.
Baum-Welch algorithm – Example
Next, we want to estimate a new emission matrix. The new estimate for emitting E from S1 is now $\hat{b}_1(E) = \sum_{t:\,o_t = E} \gamma_t(1) \big/ \sum_t \gamma_t(1)$: the expected number of times S1 emits E, divided by the expected total number of visits to S1.
Baum-Welch algorithm – Example
Calculate the rest of the emission matrix in the same way, and estimate the initial probabilities as $\hat{\pi}_i = \gamma_1(i)$. The whole procedure is repeated until the estimates converge.
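The re-estimation steps above can be put together in a short Python sketch. It performs one EM iteration on a single observation sequence; all starting numbers are hypothetical, mirroring the "first guess the matrices" step.

```python
import numpy as np

def baum_welch_step(obs, pi, A, B):
    """One Baum-Welch re-estimation of (pi, A, B) from one observation sequence."""
    T, N = len(obs), len(pi)
    # E-step: forward (alpha) and backward (beta) probabilities
    alpha = np.zeros((T, N)); beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    evidence = alpha[-1].sum()                      # P(obs | current model)
    gamma = alpha * beta / evidence                 # gamma[t, i] = P(state i at t | obs)
    xi = np.zeros((T - 1, N, N))                    # xi[t, i, j] = P(i at t, j at t+1 | obs)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
    xi /= evidence
    # M-step: normalized expected counts, as on the slides
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(axis=0) / gamma.sum(axis=0)
    return new_pi, new_A, new_B

pi = np.array([0.5, 0.5])                           # hypothetical first guesses
A = np.array([[0.5, 0.5], [0.3, 0.7]])
B = np.array([[0.3, 0.7], [0.8, 0.2]])
pi, A, B = baum_welch_step([0, 1, 1, 0], pi, A, B)  # repeat until convergence
```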
C4.5 Algorithm
- C4.5 is an extension of the ID3 algorithm.
- The algorithm is used to generate a decision tree.
- Greedy approach: selects the best attribute to split the dataset on at each iteration.
[Figure: candidate splits labelled "Local worst split", "Best split" and "Good split".]
Splitting criteria
- Entropy
- Normalized information gain
- Gini coefficient
Splitting criterion – Entropy
H(S) is a measure of the amount of uncertainty in the set S:
$H(S) = -\sum_{x \in X} p(x) \log_2 p(x)$
where:
- S is the current set for which entropy is being calculated;
- X is the set of classes in S;
- p(x) is the proportion of the number of elements in class x to the number of elements in set S.
When H(S) = 0, the set S is perfectly classified.
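A minimal sketch of this measure in Python; the posture labels in the usage lines are made-up examples, not data from the study.

```python
import math
from collections import Counter

def entropy(labels):
    """H(S) = -sum over classes x of p(x) * log2 p(x)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

print(entropy(["sit", "sit", "stand", "lie"]))   # mixed set -> 1.5 bits
print(entropy(["sit", "sit", "sit"]))            # perfectly classified -> 0.0
```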
Example
[Worked entropy calculation shown on the slide.]
Splitting criterion – Information gain
IG(A) is the measure of the difference in entropy from before to after the set S is split on the attribute A:
$IG(A) = H(S) - \sum_{t \in T} p(t)\, H(t)$
where:
- H(S) is the entropy of set S;
- T is the set of subsets created from splitting S by attribute A, such that $S = \bigcup_{t \in T} t$;
- p(t) is the proportion of the number of elements in t to the number of elements in set S;
- H(t) is the entropy of subset t.
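The same formula in Python, grouping the labels by attribute value before weighting each subset's entropy. The tiny dataset in the usage lines is hypothetical.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):                       # H(S), as in the previous sketch
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attribute):
    """IG(A) = H(S) - sum over subsets t of p(t) * H(t)."""
    subsets = defaultdict(list)
    for row, label in zip(rows, labels):
        subsets[row[attribute]].append(label)   # group labels by attribute value
    remainder = sum(len(t) / len(labels) * entropy(t) for t in subsets.values())
    return entropy(labels) - remainder

rows = [{"outlook": "sun"}, {"outlook": "sun"}, {"outlook": "rain"}]
labels = ["walk", "walk", "sit"]
print(information_gain(rows, labels, "outlook"))   # 0.918: the split is perfect
```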
Splitting criterion – Gini coefficient
The Gini coefficient measures the inequality among values of a frequency distribution. A Gini coefficient of zero expresses perfect equality, where all values are the same.
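In decision-tree practice the closely related Gini impurity is the usual form of this criterion; a minimal sketch of that variant, again with made-up labels:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum over classes of p(x)^2; 0 for a pure set."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini(["sit", "sit", "sit"]))      # 0.0 -- all values the same
print(gini(["sit", "stand", "lie"]))    # ~0.667 -- maximally mixed 3-class set
```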
C4.5 Algorithm
- It iterates through every unused attribute of the set S.
- It calculates the entropy H(S) or the information gain IG(A) for each attribute.
- It selects the attribute with the smallest entropy or largest information gain.
- The resulting tree is pruned.
(A sketch of the attribute-selection step follows.)
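A minimal sketch of the greedy selection step, assuming the information_gain helper from the previous snippet; the feature names ("tilt", "accel") and the rows are hypothetical.

```python
def best_attribute(rows, labels, attributes):
    """C4.5-style greedy choice: the attribute with the largest information gain."""
    return max(attributes, key=lambda a: information_gain(rows, labels, a))

rows = [{"tilt": "high", "accel": "low"}, {"tilt": "low", "accel": "low"},
        {"tilt": "high", "accel": "high"}, {"tilt": "low", "accel": "high"}]
labels = ["stand", "sit", "walk", "sit"]
print(best_attribute(rows, labels, ["tilt", "accel"]))   # -> "tilt" (IG 1.0 vs 0.5)
```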
The c'BAN
- Wireless communication device
- Wireless sensor platform
- Remote computational node
Data collection protocol
- Five Labrador Retrievers
- Four different sensor sites: rump, chest, abdomen, back
Description of the research

| Type of activity | Number of repetitions | The behaviors | Comments |
|---|---|---|---|
| Static | 5 | Sitting, standing, lying down, eating off the ground, standing on two legs | The dogs returned to a standing position between repetitions |
| Dynamic | 3 | Walk up the stairs, walk across a platform to the ramp, walk down the ramp | The dogs walked back to the starting position between repetitions |
The findings
[Figure: representative sample of acceleration data from the x-axis of the accelerometer at each of the four sensor locations.]
The classification flow
Stage 1
1. HMMs, the Viterbi algorithm and the Baum-Welch algorithm were used to identify each of the dynamic activities.
Stage 2
2. Decides whether the activity is dynamic or not.
Stages 3, 4 and 5
3. A moving average filter smooths the signal.
4. A decision tree was used to distinguish between transitions and postures.
5. Decides whether the activity is a posture or a transition.
Stages 6 and 7
6. A decision tree was used to classify the specific postures.
7. Finally, the algorithm outputs the specific posture. (A structural sketch of the whole cascade follows.)
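A purely structural Python sketch of the seven-stage cascade. Every component below is a stub standing in for a trained model; the names, scores and threshold are hypothetical, not the study's.

```python
def score_dynamic_hmms(window):          # stage 1: one HMM per dynamic activity
    return {"stairs": 0.20, "platform": 0.10, "ramp": 0.05}

def is_dynamic(scores, threshold=0.15):  # stage 2: dynamic-or-not decision
    return max(scores.values()) > threshold

def smooth(window):                      # stage 3: moving average filter (stub)
    return window

def posture_or_transition(window):       # stages 4-5: first decision tree (stub)
    return "posture"

def specific_posture(window):            # stage 6: second decision tree (stub)
    return "sitting"

def classify_window(window):
    scores = score_dynamic_hmms(window)
    if is_dynamic(scores):
        return max(scores, key=scores.get)          # most likely dynamic activity
    window = smooth(window)
    if posture_or_transition(window) == "transition":
        return "transition"
    return specific_posture(window)                 # stage 7: final posture label

print(classify_window([0.1, 0.2, 0.15]))            # -> "stairs" with these stubs
```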
Conclusions
THANK YOU!