Large Scale Multi-Label Classification via MetaLabeler Lei Tang Arizona State University Suju Rajan and Vijay K. Narayanan Yahoo! Data Mining.


1 Large Scale Multi-Label Classification via MetaLabeler Lei Tang Arizona State University Suju Rajan and Vijay K. Narayanan Yahoo! Data Mining & Research

2 Large Scale Multi-Label Classification
Huge number of instances and categories
Common for online content:
Query Categorization
Web Page Classification
Social Bookmark/Tag Recommendation
Video Annotation/Organization

3 Challenges
Multi-Class: thousands of categories
Multi-Label: each instance has >1 label
Large Scale: huge numbers of instances and categories
Our query categorization problem: 1.5M queries, 7K categories
Yahoo! Directory: 792K docs, 246K categories (Liu et al. 05)
Most existing multi-label methods do not scale:
structural SVM, mixture models, collective inference, maximum-entropy models, etc.
The simplest One-vs-Rest SVM is still widely used

4 One-vs-Rest SVM
[Figure: four binary SVMs (SVM1-SVM4), one per category C1-C4. Training instances x1-x4 carry label sets x1: {C1, C3}, x2: {C1, C2, C4}, x3: {C2}, x4: {C2, C4}; each SVM treats its own category's instances as positive (+) and all others as negative (-), and prediction applies all four SVMs to rank the categories.]

5 One-vs-Rest SVM
Pros:
Simple, fast, scalable
Each label trained independently, easy to parallelize
Cons:
Highly skewed class distribution (few +, many -)
Biased prediction scores
Still outputs reasonably good rankings (Rifkin and Klautau 04)
e.g. 4 categories C1, C2, C3, C4; true labels for x1: C1, C3
Prediction scores: {s1, s3} > {s2, s4}
But how do we predict the number of labels?
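The ranking behavior described above can be sketched as follows. This is a minimal illustration, not the talk's actual models: the per-category linear scorers stand in for trained binary SVM decision functions.

```python
# One-vs-rest: one binary scorer per category; the scores give a ranking,
# even though the raw values are biased by the skewed class distributions.
def one_vs_rest_rank(x, scorers):
    """scorers: category -> callable returning a real-valued margin."""
    return sorted(scorers, key=lambda c: scorers[c](x), reverse=True)

# Hypothetical per-category linear scorers over a 2-d feature vector.
scorers = {
    "C1": lambda x: 0.9 * x[0] - 0.1 * x[1],
    "C2": lambda x: -0.2 * x[0],
    "C3": lambda x: 0.7 * x[0],
    "C4": lambda x: -0.6 * x[0],
}
print(one_vs_rest_rank([1.0, 0.0], scorers))  # ['C1', 'C3', 'C2', 'C4']
```

With true labels {C1, C3} for this instance, the ranking is good even if the raw scores are biased; the open question is where to cut it off.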

6 MetaLabeler Algorithm
Obtain a ranking of class membership for each instance
Any generic ranking algorithm can be applied; here, One-vs-Rest SVM
Build a meta model to predict the number of top classes:
Construct meta labels
Construct meta features
Build the meta model
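The three meta-model steps above can be sketched as follows, mirroring the Q1-Q3 example from the next slide. The score vectors and the plain least-squares regression here are illustrative assumptions, not the paper's exact meta-model.

```python
import numpy as np

# Toy multi-label training data: each instance's true label set
# (hypothetical queries, mirroring the slides' Q1-Q3 example).
label_sets = [
    {"Formal wear", "Women Clothing"},                    # Q1 -> 2 labels
    {"Children Clothing"},                                # Q2 -> 1 label
    {"Fashion", "Women Clothing", "Leather Clothing"},    # Q3 -> 3 labels
]

# Step 1: construct the meta label -- the number of true labels.
meta_labels = np.array([len(s) for s in label_sets], dtype=float)

# Step 2: construct meta features -- in practice, e.g. the one-vs-rest
# prediction scores; these dummy score vectors stand in for them.
meta_features = np.array([
    [0.9, 0.7, -0.2, -0.6],
    [0.8, -0.1, -0.3, -0.9],
    [0.9, 0.6, 0.4, -0.5],
])

# Step 3: fit a regression meta-model (here, least squares with a bias term).
X = np.hstack([meta_features, np.ones((len(meta_features), 1))])
w, *_ = np.linalg.lstsq(X, meta_labels, rcond=None)

# Predictions like 2.5 labels are rounded, and clipped to at least 1.
pred = np.clip(np.rint(X @ w), 1, None).astype(int)
print(pred)  # [2 1 3]
```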

7 Meta Model - Training
Q1 = affordable cocktail dress; Labels: Formal wear, Women Clothing
Q2 = cotton children jeans; Labels: Children Clothing
Q3 = leather fashion in 1990s; Labels: Fashion, Women Clothing, Leather Clothing
Meta data (query: #labels): Q1: 2, Q2: 1, Q3: 3
One-vs-Rest SVM is trained on the original labels; a regression meta-model is trained on the meta data.
How to handle predictions like 2.5 labels?

8 Meta Feature Construction
Content-based: use the raw data; raw data contains all the info
Score-based: use the prediction scores; bias in the scores might be learned
Rank-based: use the sorted prediction scores
e.g. scores (C1, C2, C3, C4) = (0.9, -0.2, 0.7, -0.6); score-based meta feature: (0.9, -0.2, 0.7, -0.6); rank-based meta feature: (0.9, 0.7, -0.2, -0.6)
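The score-based and rank-based meta features from the example above can be built like this (category names and scores are taken from the slide; the fixed category order is an assumption):

```python
# One-vs-rest prediction scores for one instance (the slide's example).
scores = {"C1": 0.9, "C2": -0.2, "C3": 0.7, "C4": -0.6}

# Score-based meta feature: raw scores in a fixed category order.
score_feature = [scores[c] for c in ("C1", "C2", "C3", "C4")]

# Rank-based meta feature: the same scores sorted descending,
# discarding which category produced which score.
rank_feature = sorted(scores.values(), reverse=True)

print(score_feature)  # [0.9, -0.2, 0.7, -0.6]
print(rank_feature)   # [0.9, 0.7, -0.2, -0.6]
```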

9 MetaLabeler Prediction
Given one instance:
Obtain the ranking over all labels
Use the meta model to predict the number of labels
Pick the top-ranking labels
MetaLabeler is easy to implement:
Use existing SVM packages/software directly
Can be combined with a hierarchical structure easily: simply build a meta model at each internal node
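The three prediction steps can be sketched as below; `predict_k` is a hypothetical stand-in for the trained regression meta-model:

```python
def metalabeler_predict(scores, predict_k):
    """Rank labels by their one-vs-rest score, then keep the top k,
    where k comes from the meta model (here a stand-in callable)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, predict_k(scores))  # always emit at least one label
    return ranked[:k]

scores = {"C1": 0.9, "C2": -0.2, "C3": 0.7, "C4": -0.6}
# Hypothetical meta model that predicted 2 labels for this instance.
print(metalabeler_predict(scores, lambda s: 2))  # ['C1', 'C3']
```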

10 Baseline Methods
Existing thresholding methods (Yang 2001):
Rank-based Cut (RCut): outputs a fixed number of top-ranking labels for each instance
Proportion-based Cut: for each label, chooses a proportion of test instances as positive; not applicable for online prediction
Score-based Cut (SCut, a.k.a. threshold tuning): for each label, determines a threshold via cross-validation; tends to overfit and is not very stable
MetaLabeler: a local RCut method that customizes the number of labels for each instance
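For contrast with MetaLabeler, minimal sketches of RCut and SCut on the running four-category example (the per-label thresholds shown are made up for illustration, not tuned by cross-validation):

```python
def rcut(scores, k):
    """Rank-based cut: the same fixed number k of top labels for every instance."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:k])

def scut(scores, thresholds):
    """Score-based cut: keep each label whose score clears its own threshold."""
    return {c for c, s in scores.items() if s >= thresholds[c]}

scores = {"C1": 0.9, "C2": -0.2, "C3": 0.7, "C4": -0.6}
print(rcut(scores, 2))  # {'C1', 'C3'}
print(scut(scores, {"C1": 0.0, "C2": 0.0, "C3": 0.8, "C4": 0.0}))  # {'C1'}
```

MetaLabeler behaves like `rcut` but with a per-instance k predicted by the meta model, rather than one global k.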

11 Publicly Available Benchmark Data
Yahoo! Web Page Classification:
11 data sets, each constructed from a top-level category; 2nd-level topics are the categories
16-32k instances, 6-15k features; up to 17 labels per instance
Each label has at least 100 instances
RCV1: a large-scale text corpus
101 categories, 3.2 labels per instance
For evaluation purposes, 3000 instances for training and 3000 for testing
Highly skewed distribution (some labels have only 3-4 instances)

12 MetaLabeler with Different Meta Features
Which type of meta feature is more predictive?
The content-based MetaLabeler outperforms the other meta features.
Yahoo! performance is averaged over the 11 data sets (please refer to the paper for details); RCV1 performance is averaged over 5-fold cross-validation.

13 Performance Comparison
MetaLabeler tends to outperform other methods

14 Bias with MetaLabeler
The distribution of the number of labels is imbalanced:
Most instances have a small number of labels; a small portion of instances have many more.
The imbalanced distribution leads to bias in MetaLabeler:
It prefers to predict fewer labels, and predicts many labels only with strong confidence.
The Society data set is chosen as it has the largest number of categories (23).

15 Scalability Study
Each curve shows the total computation time. (This differs slightly from the figure in the paper, which shows only the additional time for MetaLabeler and threshold tuning.)
Threshold tuning requires cross-validation, otherwise it overfits.
MetaLabeler simply adds some meta labels and learns One-vs-Rest SVMs.

16 Scalability Study (cont.)
Threshold tuning: increases linearly with the number of categories in the data
E.g. 6000 categories -> 6000 thresholds to be tuned
MetaLabeler: upper-bounded by the maximum number of labels on any one instance
E.g. even with thousands of categories, if one instance has at most 15 labels, we just need to learn 15 additional binary SVMs
The meta model is "independent" of the number of categories

17 Application to Large Scale Query Categorization
Query categorization problem:
1.5 million unique queries: 1M for training, 0.5M for testing
120k features
An 8-level taxonomy of 6433 categories
Multiple labels, e.g. "0% interest credit card no transfer fee":
Financial Services/Credit, Loans and Debt/Credit/Credit Card/Balance Transfer
Financial Services/Credit, Loans and Debt/Credit/Credit Card/Low Interest Card
Financial Services/Credit, Loans and Debt/Credit/Credit Card/Low-No-fee Card
1.23 labels on average, at most 26 labels

18 Flat Model
Flat model: does not leverage the hierarchical structure.
Threshold tuning on the training data alone takes 40 hours, while MetaLabeler takes 2 hours. Here, thresholds are tuned on the training data only, with no cross-validation, as some categories have few instances. A similar pattern is observed for Macro-F1.

19 Hierarchical Model - Training
Step 1: Generate training data at the current node (starting from the root)
Step 2: Roll up labels to the node's children
Step 3: Create an "Other" category
Step 4: Train One-vs-Rest SVMs, then repeat on the new training data at each internal node
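The label roll-up and "Other" construction (Steps 2-3) can be sketched as a simple mapping; the `parent` table and the label names here are hypothetical:

```python
def rollup(labels, parent):
    """Roll deep labels up to the current node's children; labels outside
    the node's subtree fall into the catch-all "Other" category."""
    return {parent.get(label, "Other") for label in labels}

# Hypothetical mapping from deep labels to this node's children.
parent = {"Balance Transfer": "Credit Card", "Low Interest Card": "Credit Card"}
print(rollup({"Balance Transfer", "Sports"}, parent))
# {'Credit Card', 'Other'} (set order may vary)
```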

20 Hierarchical Model - Prediction
Predict using the SVMs trained at the root level (children m1-m4), then descend: e.g. query q is routed root -> m2 -> c1 (stop), or root -> m3 -> "Other" (stop).
Stop when reaching a leaf node or the "Other" category.
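The top-down prediction loop can be sketched as below; the taxonomy and the routing decisions are hypothetical stand-ins for the One-vs-Rest SVMs trained at each node:

```python
def predict_path(node, query, taxonomy, classify):
    """Route `query` down the taxonomy from `node`, stopping at a leaf
    or at the "Other" catch-all; `classify(node, query)` stands in for
    the one-vs-rest SVMs trained at that node."""
    child = classify(node, query)
    if child == "Other" or child not in taxonomy:  # leaf or catch-all
        return [child]
    return [child] + predict_path(child, query, taxonomy, classify)

# Hypothetical two-level taxonomy and assumed per-node SVM decisions.
taxonomy = {"root": ["m1", "m2", "m3", "m4"], "m2": ["c1", "c2"]}
route = {("root", "q"): "m2", ("m2", "q"): "c1"}
print(predict_path("root", "q", taxonomy, lambda n, q: route[(n, q)]))
# ['m2', 'c1']
```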

21 Hierarchical Model + MetaLabeler
Precision decreases by 1-2%, but recall improves by 10% at deeper levels.

22 Features in MetaLabeler
Related categories, e.g. query "Overstock.com":
Mass Merchants/.../Discount Department Stores
Apparel & Jewelry; Electronics & Appliances; Home & Garden; Books-Movies-Music-Tickets
Query "Blizzard":
Toys & Hobbies/.../Video Game
Computing/.../Computer Game Software
Entertainment & Social Event/.../Fast Food Restaurant
Reference/News/Weather Information
Threading:
Books-Movies-Music-Tickets/.../Computing Books
Computing/.../Programming
Health and Beauty/.../Unwanted Hair
Toys and Hobbies/.../Sewing

23 Conclusions & Future Work
MetaLabeler is promising for large-scale multi-label classification
Core idea: learn a meta model to predict the number of labels
Simple, efficient and scalable
Uses existing SVM software directly; easy to deploy in practice
Future work:
How to optimize MetaLabeler for a desired performance level, e.g. >95% precision?
Application to social-networking-related tasks

24 Questions?

25 References
Liu, T., Yang, Y., Wan, H., Zeng, H., Chen, Z., and Ma, W. Support vector machines classification with a very large-scale taxonomy. SIGKDD Explorations Newsletter 7(1), 2005.
Rifkin, R. and Klautau, A. In Defense of One-Vs-All Classification. Journal of Machine Learning Research 5, 2004.
Yang, Y. A study of thresholding strategies for text categorization. SIGIR '01, ACM, 2001.

26 Hierarchical vs. Flat Model
Flat model: build one-vs-rest SVMs over all the labels, with no taxonomy information during training.
The hierarchical model has about 5% higher recall at deeper levels.
