1 Yuan Li, Chang Huang and Ram Nevatia
Learning to Associate: HybridBoosted Multi-Target Tracker for Crowded Scene. Yuan Li, Chang Huang and Ram Nevatia. The paper uses machine learning to link trajectory fragments (tracklets) and produce full tracks. CVPR 2009; University of Southern California, computer vision research at the IRIS lab.

2 Yuan Li

3 Outline Introduction, Related work, MAP formulation, Affinity model,
Results, Conclusion. MAP formulation for tracklet association: use maximum a posteriori estimation to find the best set of trajectories. Affinity model for association: how we train an affinity model that finds the most likely tracklet links.

4

5

6 Overview

7 STAGE 1 STAGE 2 STAGE 3 STAGE 4

8 Introduction A learning-based hierarchical approach to multi-target tracking. The HybridBoost algorithm uses a hybrid loss function, and the association of tracklets is formulated as a joint problem of ranking and classification. The paper incorporates machine learning into a multi-stage pipeline that keeps linking tracklets until full trajectories are formed; this is a form of data-association-based tracking (DAT).

9 Ranking The ranking part aims to rank correct tracklet associations higher than the alternatives.

10 Classification The classification part is responsible for rejecting wrong associations when no further association should be made, i.e., it prevents links where a trajectory should have ended.

11 HybridBoost HybridBoost combines the merits of the RankBoost algorithm and the AdaBoost algorithm. The basic idea of RankBoost is to formalize learning to rank as binary classification on instance pairs and then adopt a boosting approach. Like all boosting algorithms, RankBoost trains one weak ranker at each round of iteration and combines these weak rankers into the final ranking function. After each round, the instance pairs are re-weighted: the weight of correctly ranked pairs is decreased and the weight of wrongly ranked pairs is increased.

12 AdaBoost Boosting repeatedly picks a learner that is only slightly better than chance (a weak learner)
and uses round-by-round weight updates to drive down the overall error rate. RankBoost and AdaBoost are very similar: both are binary formulations and share the same principle of raising the weights of samples handled incorrectly in the previous round and lowering the weights of those handled correctly, so that later rounds are more likely to fix earlier mistakes. In this paper, AdaBoost classifies associations as correct or wrong, while RankBoost adopts the same dynamic reweighting to turn the ranking problem into binary ordering of instance pairs (training samples x1, ..., xm).

13 RankBoost RankBoost likewise adjusts sample weights dynamically and turns the ranking problem into binary ordering: it compares two instances at a time,
trains one weak ranker at each round, and after each round increases the weights of wrongly ranked pairs and decreases the weights of correctly ranked pairs. Learning to rank is thus cast as binary classification on instance pairs.
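A minimal Python sketch of one such round, as an illustration of the pair-reweighting idea only: the scoring-based weak ranker and the AdaBoost-style alpha formula are simplifying assumptions, not the paper's exact procedure.

import math

def rankboost_round(pairs, weights, weak_ranker):
    """One RankBoost-style round.
    pairs: list of (x_correct, x_wrong) instance pairs, where x_correct should
    be ranked above x_wrong.  weights: one weight per pair.
    weak_ranker: a scoring function h(x) -> float chosen this round."""
    # A pair is ranked correctly if the weak ranker scores the correct
    # association higher than the wrong one.
    correct = [weak_ranker(a) > weak_ranker(b) for a, b in pairs]

    # Weighted error of this weak ranker on the current pair weights.
    err = sum(w for w, ok in zip(weights, correct) if not ok) / sum(weights)
    err = min(max(err, 1e-9), 1 - 1e-9)          # avoid log(0)
    alpha = 0.5 * math.log((1 - err) / err)      # AdaBoost-style coefficient (assumption)

    # Re-weight: decrease correctly ranked pairs, increase wrongly ranked ones.
    new_weights = [w * math.exp(-alpha if ok else alpha)
                   for w, ok in zip(weights, correct)]
    z = sum(new_weights)                         # normaliser
    return alpha, [w / z for w in new_weights]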

14 Related work In contrast to frame-by-frame tracking, the earliest works consider a much longer period of time, but this makes the search space huge; to cope with it, some methods use dynamic programming, linear programming, min-cost flow, or the Hungarian algorithm. This leads to the category of Data Association based Tracking (DAT) algorithms, yet the feature thresholds and weights in their affinity models are usually set by hand and no machine learning has been used to build the affinity model. The focus of this paper is to train the affinity model with machine learning, which yields more accurate results than hand-defined models.

15 MAP formulation "Robust Object Tracking by Hierarchical Association of Detection Responses" (the earlier hierarchical-association work) vs. ours.

16 MAP formulation v1 R = {r_i} is the set of all detection responses.
From the earlier paper (their reference [8]): the low level builds the most basic, reliable tracklets.

17 MAP formulation v1 (cont.)
Tracklet association: the middle level associates the tracklets produced by the low level, cast as a MAP (maximum a posteriori) problem. Assuming the input tracklets are conditionally independent given S (the tracklet association set), Bayes' rule gives the right-hand side; further assuming the tracklet associations {S_k} are independent of each other gives the factored form below.

18 MAP formulation v1 (cont.)
P+ is the likelihood of a true detection (the response really is a person and belongs to the k-th track); P- is the likelihood of a false alarm (a wrong detection).
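The slide's equations are not reproduced in this transcript; the following is a hedged LaTeX reconstruction of the standard hierarchical-association MAP objective that the notes describe (the exact notation on the slide may differ):

S^* = \arg\max_{S} P(S \mid R)
    = \arg\max_{S} P(R \mid S)\, P(S)
    = \arg\max_{S} \prod_{r_i \in R} P(r_i \mid S) \prod_{S_k \in S} P(S_k),
\qquad
P(r_i \mid S) =
\begin{cases}
P_{+}(r_i) & \text{if } r_i \text{ belongs to some track (true detection)}, \\
P_{-}(r_i) & \text{if } r_i \text{ is a false alarm}.
\end{cases}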

19 MAP formulation v2 The notation differs here: each T_k is a tracklet association (a track hypothesis), and T is the set of them. To obtain the best linking, the problem is again converted to MAP:
T* maximizes the product over false alarms times the product over all correct tracks, where each track's probability is the initialization term at its head, times the likelihood that each constituent tracklet (and link) is correct, through to the termination term.

20 MAP formulation v2 (cont.)
Inner cost, Transition cost

21 MAP formulation v2 (cont.)
With these, we can rewrite the objective.
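The rewritten objective is not transcribed here; as a hedged sketch, taking the negative log of the MAP product turns it into an additive cost built from the per-track inner costs and per-link transition costs named on the previous slide (C_inner and C_trans are placeholder names, not the paper's symbols):

T^* = \arg\min_{T} \; \sum_{T_k \in T} C_{\text{inner}}(T_k) \; + \sum_{T_i \rightarrow T_j} C_{\text{trans}}(T_i, T_j)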

22 Affinity model HybridBoost algorithm; Feature pool and weak learner;
Training process

23 HybridBoost algorithm
Figure: example tracklets T1, T2, T3 scored by the ranker H.

24 HybridBoost algorithm (cont.)
Because HybridBoost is built from AdaBoost and RankBoost, training sets are needed for both: a ranking training set and a binary-classification training set. A ranking sample R says that one way of linking is more likely than another; a binary sample B says whether a given link is correct or not (during training this is decided by comparison with the ground truth; during tracking it is decided by the learned weak learners).

25 Loss function The initial HybridBoost loss function
is a linear combination of the RankBoost loss and the binary (AdaBoost) loss. Z_t is updated round by round: each round trains over every feature and threshold to pick α_t and h_t and to update the sample weights w_t(x_j, y_j), which together build H, the strong ranking classifier.
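The loss itself is not transcribed; as a hedged reconstruction from the notes, with β weighting the ranking part and the exponential forms assumed from the standard RankBoost/AdaBoost losses:

Z = \beta \sum_{(x_0, x_1)} w(x_0, x_1)\, e^{\,H(x_0) - H(x_1)}
  + (1-\beta) \sum_{(x, y)} w(x, y)\, e^{-y\,H(x)},
\qquad H(x) = \sum_{t} \alpha_t\, h_t(x)

where the first sum runs over ranking pairs (x_1 should be ranked above x_0) and the second over binary samples with label y in {-1, +1}.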

26 Strong ranking classifier
Diagram: a cascade of weak ranking classifiers; after each weak learner the sample weights are updated.

27 HybridBoost algorithm
Table 1: within one stage there are n iterations (rounds); each round trains a weak ranker by finding the α_t and h_t that minimize Z (the loss function). After all rounds (100 here), the stage's final strong ranking classifier is obtained; a sketch of this loop follows.
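A minimal Python sketch of that per-stage loop, assuming each round greedily picks the (feature, threshold) weak ranker and coefficient that minimize the current loss Z; select_best_weak_ranker is a hypothetical helper, sketched after slide 30.

def train_stage(ranking_pairs, binary_samples, feature_pool, n_rounds=100):
    """Train one stage's strong ranking classifier H(x) = sum_t alpha_t * h_t(x)."""
    strong = []                              # list of (alpha_t, h_t)
    pair_w = [1.0] * len(ranking_pairs)      # ranking-pair weights
    bin_w = [1.0] * len(binary_samples)      # binary-sample weights
    for _ in range(n_rounds):
        # Pick the weak ranker and its coefficient minimising Z on the current
        # weights, and get the re-weighted samples back (hypothetical helper).
        alpha, h, pair_w, bin_w = select_best_weak_ranker(
            ranking_pairs, pair_w, binary_samples, bin_w, feature_pool)
        strong.append((alpha, h))

    def H(x):                                # the strong ranking classifier
        return sum(a * h(x) for a, h in strong)
    return H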

28 Weak ranking classifier
Feature & threshold

29

30 Feature pool and weak learner
Table 2: in every round, Z is minimized over each feature and its candidate thresholds; h is a binary split on the feature. The weights w come from the previous round's update, and α̂ is the α that minimizes Z; the best pair h*(x) and α* is returned. A sketch follows.
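A minimal Python sketch of that weak-learner selection, assuming h is a 0/1 threshold split and α is found by a small grid search (the paper may compute α differently); the exponential-loss form follows the loss sketched after slide 25, and this is the hypothetical helper used by the train_stage sketch after slide 27.

import math

BETA = 0.75  # weight between the ranking and classification losses (see slide 38)

def select_best_weak_ranker(ranking_pairs, pair_w, binary_samples, bin_w,
                            feature_pool, alphas=(0.1, 0.25, 0.5, 1.0, 2.0)):
    """Grid-search the (feature, threshold) weak ranker h and coefficient alpha
    that minimise the weighted exponential loss Z, then re-weight the samples.

    ranking_pairs : list of (x0, x1) feature dicts, where x1 should outrank x0
    binary_samples: list of (x, y) with label y in {-1, +1}
    feature_pool  : list of (feature_name, candidate_thresholds)
    """
    best = None
    for feat, thresholds in feature_pool:
        for theta in thresholds:
            h = lambda x, f=feat, t=theta: 1.0 if x[f] > t else 0.0
            for a in alphas:
                z = (BETA * sum(w * math.exp(a * (h(x0) - h(x1)))
                                for (x0, x1), w in zip(ranking_pairs, pair_w))
                     + (1 - BETA) * sum(w * math.exp(-y * a * h(x))
                                        for (x, y), w in zip(binary_samples, bin_w)))
                if best is None or z < best[0]:
                    best = (z, a, h)
    _, alpha, h = best
    # Multiplicative re-weighting: wrongly ordered pairs and misclassified
    # samples gain weight, correctly handled ones lose weight.
    new_pair_w = [w * math.exp(alpha * (h(x0) - h(x1)))
                  for (x0, x1), w in zip(ranking_pairs, pair_w)]
    new_bin_w = [w * math.exp(-y * alpha * h(x))
                 for (x, y), w in zip(binary_samples, bin_w)]
    norm = sum(new_pair_w) + sum(new_bin_w)
    return alpha, h, [w / norm for w in new_pair_w], [w / norm for w in new_bin_w]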

31 Training process T: tracklet set from the previous stage
G: ground-truth track set. Φ is a function that tells whether a trajectory from the previous stage matches the ground truth (i.e., whether it was linked correctly).

32 Training process (cont.)
For each Ti ∈ T, if connecting Ti’s tail to the head of some other tracklet

33 Training process (cont.)
connecting Ti’s head to the tail of some other tracklet before Ti which is also matched to G

34 Ranking sample set

35 Binary sample set

36 Training process (cont.)
Use the ground truth G and the tracklet set T_{k−1} obtained from stage k−1 to generate ranking and binary classification samples; learn a strong ranking classifier H_k by the HybridBoost algorithm; use H_k as the affinity model to perform association on T_{k−1} and generate T_k. Repeating these three steps for K stages yields the final tracker; a sketch of the loop follows.
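A minimal Python sketch of that staged training loop; generate_samples and associate are hypothetical helpers standing in for the sample-generation rules of slides 31-35 and for the association step, and train_stage is the per-stage boosting sketch given after slide 27.

def train_tracker(T0, G, feature_pool, n_stages=4):
    """Hierarchical training: each stage learns an affinity model H_k and uses
    it to associate the previous stage's tracklets into longer ones."""
    T_prev = T0                       # low-level (short, reliable) tracklets
    models = []
    for k in range(1, n_stages + 1):
        # 1. Build ranking / binary samples from T_{k-1} and the ground truth G
        #    (hypothetical helper implementing slides 31-35).
        ranking_pairs, binary_samples = generate_samples(T_prev, G)
        # 2. Learn the stage-k strong ranking classifier with HybridBoost.
        H_k = train_stage(ranking_pairs, binary_samples, feature_pool)
        # 3. Use H_k as the affinity model to associate tracklets into longer ones
        #    (hypothetical helper, e.g. bipartite matching on affinity scores).
        T_prev = associate(T_prev, H_k)
        models.append(H_k)
    return models, T_prev             # per-stage affinity models and final tracks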

37 Experimental results Implementation details, Evaluation metrics,
Analysis of the training process, Tracking performance

38 Implementation details
A dual-threshold strategy generates short but reliable tracklets; four stages of association with maximum allowed frame gaps of 16, 32, 64 and 128; each stage learns a strong ranking classifier H with 100 weak ranking classifiers; β = 0.75, ζ = 0.
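The same settings collected as a configuration sketch; the values come from this slide, while the key names are illustrative and not from the paper.

TRACKER_CONFIG = {
    "tracklet_generation": "dual-threshold",   # short but reliable tracklets
    "num_stages": 4,
    "max_frame_gap_per_stage": [16, 32, 64, 128],
    "weak_rankers_per_stage": 100,
    "beta": 0.75,                              # ranking vs. classification weight
    "zeta": 0.0,
}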

39 Evaluation metrics Recall: measured against the GT, the fraction of ground-truth detections that are correctly recovered. Precision: measured against the output, the fraction of output detections that are correct.
Frag: for the GT trajectories, the number of times a trajectory is broken or changes its matched tracking ID, i.e., how often a person's trajectory gets tracked as someone else. IDS: for the tracked trajectories, the number of times the matched person (identity) changes.

40 Track fragments & ID switches
Traditional ID switch: "two tracks exchanging their ids". Here, an ID switch is a tracked trajectory changing its matched GT ID, counted only when the matched GT actually changes; track fragments are defined analogously, and both definitions are stricter than the traditional ones.

41 Compare IDS: within a tracked trajectory, the number of times the matched person changes. Frag: within a GT trajectory, the number of times it breaks or changes its matched tracking ID.
Example (A): trk1 = GT1->GT2, trk2 = GT2->GT1, Frag = 2; GT1 = trk1->trk2, GT2 = trk2->trk1. Example (B): IDS = 1; Trk1 = GT1->GT2, GT1 = trk1->trk3. A counting sketch follows.
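A minimal Python sketch of these two counts, assuming we already have, for each trajectory, the sequence of IDs it is matched to over time (None for unmatched frames); it only illustrates the definitions above and is not the paper's evaluation code.

def count_switches(matched_ids):
    """Count how many times the matched ID changes along one trajectory.
    Unmatched (None) frames are skipped, so a break that resumes with the
    same ID is not counted in this simplified sketch."""
    ids = [i for i in matched_ids if i is not None]
    return sum(1 for a, b in zip(ids, ids[1:]) if a != b)

def ids_and_frag(track_to_gt, gt_to_track):
    """track_to_gt: {track id: sequence of matched GT ids per frame}
    gt_to_track: {GT id: sequence of matched track ids per frame}"""
    ids = sum(count_switches(seq) for seq in track_to_gt.values())    # ID switches
    frag = sum(count_switches(seq) for seq in gt_to_track.values())   # fragments
    return ids, frag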

42 Best features Motion smoothness (feature type 13 or 14),
color histogram similarity (feature 4), and the number of missed-detection frames in the gap between the two tracklets (feature 7 or 9).

43 Strong ranking classifier output
Looking at how training evolves: appearance similarity is very strict at first, i.e., colors must be very close for two tracklets to be linked (large weight), but as the linked tracklets grow longer, lighting changes cause this constraint to be relaxed (lower weight) so that later stages can still link tracklets. Image-plane motion smoothness becomes less and less important, because later stages shift to motion smoothness in the ground plane.

44 Choice of β

45 Tracking performance

46 Conclusion and future work
Use the HybridBoost algorithm to learn the affinity model as a joint problem of ranking and classification. The affinity model is integrated into a hierarchical data association framework to track multiple targets in very crowded scenes.

47

48 Thank you

49 Problems / open questions What exactly is a tracklet and what is the affinity model? What do the circles and paths in the figures represent?
How does the method automatically select among the various features and their corresponding non-parametric models? How exactly do RankBoost and AdaBoost fit together? Where does the Hungarian algorithm come in?

