
Slide 1: CMU TEAM-A in TDT 2004 Topic Tracking
Yiming Yang, School of Computer Science, Carnegie Mellon University

Slide 2: CMU Team A
– Jaime Carbonell (PI)
– Yiming Yang (Co-PI)
– Ralf Brown
– Jian Zhang
– Nianli Ma
– Shinjae Yoo
– Bryan Kisiel, Monica Rogati, Yi Chang

Slide 3: Tasks Entered in TDT 2004
– Topic Tracking (Nianli Ma et al.)
– Supervised Adaptive Tracking (Yiming Yang et al.)
– New Event Detection (Jian Zhang et al.)
– Link Detection (Ralf Brown)
– Hierarchical Topic Detection: did not participate

Slide 4: Topic Tracking with Supervised Adaptation (“Adaptive Filtering” in TREC)
(Figure: a timeline running from past training documents through the current document to future, unlabeled test documents; each document is classified as on-topic or off-topic for Topic 1, Topic 2, Topic 3, …, and true relevance feedback on those decisions flows back into the topic models.)

Slide 5: Topic Tracking with Pseudo-Relevance Feedback (“Topic Tracking” in TDT)
(Figure: the same timeline as Slide 4, but no true labels are available; documents the system itself judges on-topic are fed back as pseudo-relevance feedback (PRF) to update the topic models. A minimal sketch of the two feedback loops follows.)
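To make the contrast between Slides 4 and 5 concrete, here is a minimal Python sketch of a centroid-based tracking loop. This is an illustration, not the CMU system: the cosine scorer, the threshold, and the adaptation weight beta are placeholder choices.

    import numpy as np

    def track(centroid, docs, threshold, beta=0.1, labels=None):
        """Emit an on/off-topic decision for each incoming document vector.
        With labels (supervised adaptation), adapt on the true on-topic docs;
        without labels (PRF), adapt on docs the system itself accepts."""
        decisions = []
        for i, d in enumerate(docs):
            # cosine similarity between the topic centroid and the document
            score = centroid @ d / (np.linalg.norm(centroid) * np.linalg.norm(d))
            accept = bool(score >= threshold)
            decisions.append(accept)
            # feedback signal: true label (SA) vs. the system's own decision (PRF)
            positive = labels[i] if labels is not None else accept
            if positive:
                centroid = centroid + beta * d  # move the centroid toward the doc
        return decisions

The only difference between the two settings is which signal drives adaptation; with PRF, early false alarms can drag the centroid off-topic, which is why the weighted variants on the next slides matter.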

Slide 6: Adaptive Rocchio with PRF
– Conventional version
– Improved version
(The formulas for both versions appeared as images on the original slide; a reconstruction is sketched below.)
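Since the slide's formulas are not in the transcript, here is a hedged reconstruction. The conventional adaptive Rocchio update is standard; for the improved version I assume pseudo-positive documents are weighted by their similarity scores s(d), which is consistent with the “Weighted PRF” label on Slide 7 but is an assumption:

    \vec{c}\,' = \alpha\,\vec{c} + \frac{\beta}{|D^{+}|}\sum_{d \in D^{+}}\vec{d} \;-\; \frac{\gamma}{|D^{-}|}\sum_{d \in D^{-}}\vec{d}

    \vec{c}\,' = \alpha\,\vec{c} + \beta\,\frac{\sum_{d \in D^{+}} s(d)\,\vec{d}}{\sum_{d \in D^{+}} s(d)} \;-\; \frac{\gamma}{|D^{-}|}\sum_{d \in D^{-}}\vec{d}

Here c is the current topic centroid, D+ and D− are the (pseudo-)positive and negative document sets, and α, β, γ are interpolation weights.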

Slide 7: Rocchio in Tracking on TDT 2003 Data
Weighted PRF reduced Ctrk by 12%.
Ctrk: the TDT tracking cost, a weighted combination of the miss rate and the false-alarm rate (defined below).
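For reference, the TDT evaluation plan defines the tracking cost as

    C_{trk} = C_{miss} \cdot P_{miss} \cdot P_{target} + C_{fa} \cdot P_{fa} \cdot (1 - P_{target})

with the usual constants C_miss = 1.0, C_fa = 0.1, and P_target = 0.02 (the 2004 evaluation plan should be consulted for the exact values used here). Scores are normally reported as the normalized cost, C_trk divided by min(C_miss · P_target, C_fa · (1 − P_target)).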

Slide 8: Primary Tracking Results in TDT 2004
(Results chart shown as an image on the original slide; not in the transcript.)

Slide 9: DET Curves of Methods on TDT 2004 Data
(DET-curve figure shown as an image; an annotation marks “Charles’ target.”)

Slide 10: Supervised Adaptive Tracking
“Adaptive filtering” in TREC (since 1997):
– Rocchio with threshold-calibration strategies (Yang et al., CIKM 2003)
– Probabilistic models assuming Gaussian/exponential distributions (Arampatzis et al., TREC 2001)
– Combined use of Rocchio and logistic regression (Yi Zhang, SIGIR 2004)
A new task in TDT 2004:
– Topics are narrower and typically shorter-lived than the TREC topics

Slide 11: Our Experiments
Four methods:
– Rocchio with a fixed threshold (Roc.fix)
– Rocchio with an adaptive threshold using Margin-based Local Regression (Roc.MLR)
– Nearest Neighbor (Ralf’s variant) with a fixed threshold (kNN.fix)
– Logistic regression (LR) regularized by a complexity penalty
Three corpora:
– TDT5, the evaluation set in TDT 2004
– TDT4, a validation set for parameter tuning
– TREC-11 (2002), a reference set for robustness analysis
Two optimization criteria:
– Ctrk: the TDT standard, equivalent to setting the penalty ratio for miss vs. false alarm to roughly 1270:1
– T11SU: the TREC standard, equivalent to a penalty ratio of 2:1 (definition sketched below)
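As a reminder of the TREC criterion (this follows my recollection of the TREC-11 filtering track definition; the track guidelines are the authoritative source):

    T11U = 2R^{+} - N^{+}

    T11SU = \frac{\max(T11U / MaxU,\; MinNU) - MinNU}{1 - MinNU}, \qquad MinNU = -0.5

where R+ and N+ are the numbers of relevant and non-relevant documents retrieved, and MaxU = 2 × (total relevant) is the utility of a perfect run. The 2:1 weighting of relevant vs. non-relevant retrieved documents is the source of the 2:1 penalty ratio quoted above.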

Slide 12: Outline of Our Methods
Roc.fix and kNN.fix:
– Non-probabilistic models, generating ad hoc scores for documents with respect to each topic
– Fixed global threshold, tuned on a retrospective corpus
Roc.MLR:
– Non-probabilistic model, ad hoc scores
– Threshold locally optimized using incomplete relevance judgments over a sliding window of documents
LR:
– Probabilistic model of Pr(topic | x)
– Fixed global threshold that optimizes the utility

Slide 13: Regularized Logistic Regression
– The objective is to find the regression coefficients that minimize a penalized loss (formula shown as an image on the slide)
– This is equivalent to Maximum A Posteriori (MAP) estimation under a prior distribution on the coefficients
– The model predicts the probability of a topic given the document
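The slide's formulas are images and are missing from the transcript; a standard reconstruction of L2-regularized logistic regression (assuming the usual Gaussian prior, in line with the Zhang & Yang SIGIR 2003 reference) is

    \hat{w} = \arg\min_{w} \sum_{i=1}^{n} \log\!\left(1 + e^{-y_i\, w^{\top} x_i}\right) + \lambda \lVert w \rVert^{2}

which is MAP estimation under w ~ N(0, σ²I) with λ ∝ 1/σ², and which predicts

    \Pr(\text{topic} \mid x) = \frac{1}{1 + e^{-w^{\top} x}}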

Slide 14: Roc.fix on TDT3 Corpus
Relevance feedback on 1.6% of documents gave a 25% min-cost reduction.
(Chart legend: Base = no RF or PRF; PRF = weighted PRF; MLR = partial RF; FRF = complete RF.)

Slide 15: Effect of SA vs. PRF on TDT5 Corpus
– With Roc.fix, supervised adaptation (SA) reduced Ctrk by 54% compared to PRF.
– With nearest neighbors, SA reduced Ctrk by 48%.

Slide 16: Supervised Adaptive Tracking Results on TDT5 Corpus
For each team, the best score (with respect to Ctrk or T11SU) among the submitted runs is shown.
(Charts: Ctrk, lower is better; T11SU, higher is better.)

Slide 17: Relative Performance of Our Methods
– TREC utility (T11SU): penalty of miss vs. false alarm = 2:1
– TDT cost (Ctrk): penalty of miss vs. false alarm ≈ 1270:1

Slide 18: Main Observations
– Encouraging result: a small amount of relevance feedback (on 1-2% of documents) yielded a significant performance improvement.
– Puzzling point: Rocchio without any threshold calibration works surprisingly well on both Ctrk and T11SU, which is inconsistent with our observations on TREC data. Why?
– Scaling issue: the TDT domain poses a significant challenge for learning algorithms such as LR and MLR.

Slide 19: Temporal Nature of Topics/Events
(Figure, not in transcript: contrasts the time profiles of a TREC topic, “Elections”; a TDT event, “Nov. APEC Meeting”; and a broadcast-news topic, “Kidnappings.”)

Slide 20: Topics for Future Research
– Keep up with new algorithms/theories
– Exploit domain knowledge, e.g., predefined topics (and super-topics) in a hierarchical setting
– Investigate topic-conditioned event tracking with predictive features (including named entities)
– Develop algorithms to detect and exploit temporal trends
– TDT in cross-lingual settings

Slide 21: References
– Y. Yang and B. Kisiel. Margin-based Local Regression for Adaptive Filtering. ACM CIKM 2003 (Conference on Information and Knowledge Management).
– J. Zhang and Y. Yang. Robustness of Regularized Linear Classification Methods in Text Categorization. ACM SIGIR 2003, pp. 190-197.
– J. Zhang, R. Jin, Y. Yang and A. Hauptmann. Modified Logistic Regression: An Approximation to SVM and Its Application in Large-Scale Text Categorization. ICML 2003 (International Conference on Machine Learning), pp. 888-897.
– N. Ma, Y. Yang and M. Rogati. Cross-Language Event Tracking. Asia Information Retrieval Symposium (AIRS), 2004.

