
1 Web-based Information Architectures Jian Zhang

2 Today's Topics
– Term Weighting Scheme
– Vector Space Model & GVSM
– Evaluation of IR
– Rocchio Feedback
– Web Spider Algorithm
– Text Mining: Named Entity Identification
– Data Mining
– Text Categorization (kNN)

3 Term Weighting Scheme
TW = TF * IDF
– TF part = f1(tf(term, doc))
– IDF part = f2(idf(term)) = f2(N / df(term))
– E.g., f1(tf) = normalized_tf = tf / max_tf; f2(idf) = log2(idf)
– E.g., f1(tf) = tf; f2(idf) = 1
NOTE: df(term) is the number of documents that contain the term, not the term's total occurrence count.
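A minimal sketch of this scheme in Python, using the normalized-TF and log2 IDF variants from the slide; the function name and the df/n_docs inputs are illustrative, not from the course:

```python
import math
from collections import Counter

def term_weights(doc_tokens, df, n_docs):
    """TW = TF * IDF with f1(tf) = tf / max_tf and f2(idf) = log2(N / df)."""
    tf = Counter(doc_tokens)
    max_tf = max(tf.values())
    weights = {}
    for term, freq in tf.items():
        norm_tf = freq / max_tf                 # f1: normalized TF
        idf = math.log2(n_docs / df[term])      # f2: log2(N / df(term))
        weights[term] = norm_tf * idf
    return weights
```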

4 Document & Query Representation
– Bag of words, Vector Space Model (VSM)
– Word normalization
  – Stopword removal
  – Stemming
– Proximity phrases
– Each element of the vector is the term weight of that term w.r.t. the document/query.
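A toy normalization sketch, assuming a hand-rolled stopword list and a crude suffix-stripper as a stand-in for a real stemmer such as Porter's:

```python
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is"}  # toy list

def crude_stem(word):
    # Stand-in for a real stemmer: strip a few common suffixes.
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def normalize(text):
    """Tokenize, remove stopwords, stem: yields bag-of-words terms."""
    tokens = text.lower().split()
    return [crude_stem(t) for t in tokens if t not in STOPWORDS]
```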

5 Similarity Measure
Dot product: Sim(Q, D) = Q · D = Σ_i q_i d_i

6 Similarity Measure
Cosine similarity: Sim(Q, D) = (Q · D) / (|Q| |D|) = Σ_i q_i d_i / (√(Σ_i q_i²) · √(Σ_i d_i²))

7 Information Retrieval
– Basic assumption: relevant documents share words with the query
– Similarity measures
  – Dot product
  – Cosine similarity (normalized)
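A small sketch of both measures over sparse term-weight vectors (dicts mapping term to weight); the helper names are illustrative:

```python
import math

def dot(q, d):
    # Vectors as sparse dicts {term: weight}; sum over shared terms only.
    return sum(w * d.get(t, 0.0) for t, w in q.items())

def cosine(q, d):
    # Dot product normalized by vector lengths: Q.D / (|Q||D|).
    norm = math.sqrt(sum(w * w for w in q.values())) * \
           math.sqrt(sum(w * w for w in d.values()))
    return dot(q, d) / norm if norm else 0.0
```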

8 Evaluation
With a = relevant documents retrieved, b = non-relevant documents retrieved, c = relevant documents not retrieved:
– Recall = a/(a+c)
– Precision = a/(a+b)
– F1 = 2.0 * recall * precision / (recall + precision)
– Accuracy is a bad measure for IR: relevant documents are a tiny fraction of the collection, so a system that retrieves nothing still scores near-perfect accuracy.
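A sketch of these measures computed from sets of document IDs, with a, b, c as defined above:

```python
def evaluate(retrieved, relevant):
    """Precision, recall, F1 from retrieved/relevant document-ID sets."""
    a = len(retrieved & relevant)   # relevant and retrieved
    b = len(retrieved - relevant)   # retrieved but not relevant
    c = len(relevant - retrieved)   # relevant but not retrieved
    recall = a / (a + c) if a + c else 0.0
    precision = a / (a + b) if a + b else 0.0
    f1 = (2.0 * recall * precision / (recall + precision)
          if recall + precision else 0.0)
    return precision, recall, f1
```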

9 Refinement of VSM
– Query expansion
– Relevance feedback, Rocchio formula:
  Q' = α·Q + (β/|D_r|)·Σ_{d in D_r} d - (γ/|D_n|)·Σ_{d in D_n} d
  where α weights the original query, β weights the set D_r of known-relevant documents, and γ weights the set D_n of known non-relevant documents.
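A sketch of Rocchio feedback over sparse vectors; the default α, β, γ values below are common textbook choices, not values from the slides:

```python
def rocchio(query, rel_docs, nonrel_docs, alpha=1.0, beta=0.75, gamma=0.15):
    """Q' = alpha*Q + beta*centroid(rel) - gamma*centroid(nonrel).
    All vectors are sparse dicts {term: weight}."""
    new_q = {t: alpha * w for t, w in query.items()}
    for docs, sign, coeff in ((rel_docs, 1.0, beta), (nonrel_docs, -1.0, gamma)):
        if not docs:
            continue
        for doc in docs:
            for t, w in doc.items():
                new_q[t] = new_q.get(t, 0.0) + sign * coeff * w / len(docs)
    # Negative weights are commonly clipped to zero.
    return {t: w for t, w in new_q.items() if w > 0}
```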

10 Generalized Vector Space Model
Given a collection of training data, represent each term as an n-dimensional vector:

        D1    D2    ...   Dj    ...   Dn
  T1    w11   w12   ...   w1j   ...   w1n
  T2    w21   w22   ...   w2j   ...   w2n
  ...   ...   ...   ...   ...   ...   ...
  Ti    wi1   wi2   ...   wij   ...   win
  ...   ...   ...   ...   ...   ...   ...
  Tm    wm1   wm2   ...   wmj   ...   wmn

11 GVSM (2)
– Define similarity between terms t_i and t_j: Sim(t_i, t_j) = cos(t_i, t_j)
– Similarity between query and document is based on term-term similarity:
  – For each query term q_i, find the term t_D in the document D that is most similar to q_i. This value, v_iD, can be considered the similarity between the single-word query q_i and the document D.
  – Sum the similarities between each query term and the document D. This sum is considered the similarity between the query and the document D.

12 GVSM (3)
Sim(Q, D) = Σ_i [Max_j (sim(q_i, d_j))]
or, normalizing for document & query length:
Sim_norm(Q, D) = Σ_i [Max_j (sim(q_i, d_j))] / (|Q| |D|)
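A sketch of the unnormalized sum-of-max computation; term_vecs is assumed to map each term to its row of the term-document matrix on slide 10, with every query and document term present in it:

```python
import math

def cos(u, v):
    # Cosine between two sparse term vectors (dicts: doc -> weight).
    num = sum(w * v.get(k, 0.0) for k, w in u.items())
    den = math.sqrt(sum(w * w for w in u.values())) * \
          math.sqrt(sum(w * w for w in v.values()))
    return num / den if den else 0.0

def gvsm_sim(query_terms, doc_terms, term_vecs):
    """Sim(Q, D) = sum over q_i of max over d_j of cos(q_i, d_j)."""
    return sum(max(cos(term_vecs[q], term_vecs[d]) for d in doc_terms)
               for q in query_terms)
```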

13 Maximal Marginal Relevance
– Redundancy reduction
– Retrieving more novel results
– Formula:
  MMR(Q, C, R) = Argmax_k d_i in C [λ S(Q, d_i) - (1-λ) max_{d_j in R} S(d_i, d_j)]
  where C is the candidate set, R the already-selected set, and λ trades relevance off against novelty.
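A greedy selection sketch of this formula; sim is any similarity function (e.g., cosine), and λ = 0.7 echoes the example that follows:

```python
def mmr_select(query, candidates, k, sim, lam=0.7):
    """Repeatedly take the argmax over d_i in C\\R of
    lam*S(Q, d_i) - (1-lam)*max_{d_j in R} S(d_i, d_j)."""
    selected = []               # R: already-selected items
    pool = list(candidates)     # C: remaining candidates
    while pool and len(selected) < k:
        def mmr_score(d):
            redundancy = max((sim(d, s) for s in selected), default=0.0)
            return lam * sim(query, d) - (1 - lam) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected
```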

14 MMR Example (Summarization)
[Diagram: a full text of six sentences S1–S6, a query, and the resulting summary {S1, S3, S4}.]

15 MMR Example (Summarization) – Select first sentence (λ = 0.7)
[Diagram: query-sentence similarities Sim(Q, S) = Q · S / (|Q||S|); S3 has the highest score (0.6) and is selected first.]

16 MMR Example (Summarization) – Select second sentence
[Diagram: MMR scores with S3 already in the summary; S1 scores highest and is selected second.]

17 MMR Example (Summarization) – Select third sentence
[Diagram: MMR scores with {S3, S1} in the summary; S4 scores highest and is selected third.]

18 Text Categorization Task
You want to classify a document into categories automatically; for example, the categories might be "weather" and "sport". To do that, you can use the kNN algorithm. To use kNN, you need a collection of documents, each of which has been labeled with categories by a human.

19 Text Categorization Procedure
– Using VSM, represent each document in the training data.
– Using VSM, represent the document to be categorized (the new document).
– Using cosine similarity (or another measure, but cosine works well here because it normalizes away document-length differences), find the top k documents (the k nearest neighbors) in the training data that are most similar to the new document.
– Decide from the k nearest neighbors which categories to assign to the new document, as in the sketch below.
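A minimal kNN sketch following these steps; majority voting over the neighbors' labels is one common decision rule, not necessarily the one used in class:

```python
import math
from collections import Counter

def cosine(q, d):
    num = sum(w * d.get(t, 0.0) for t, w in q.items())
    den = math.sqrt(sum(w * w for w in q.values())) * \
          math.sqrt(sum(w * w for w in d.values()))
    return num / den if den else 0.0

def knn_categorize(new_doc, training, k=5):
    """training: list of (doc_vector, categories) pairs, where vectors
    are the VSM representations from the previous steps."""
    neighbors = sorted(training, key=lambda ex: cosine(new_doc, ex[0]),
                       reverse=True)[:k]
    votes = Counter(cat for _, cats in neighbors for cat in cats)
    # Assign the top-voted category (or all above a vote threshold).
    return votes.most_common(1)[0][0] if votes else None
```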

20 Web Spider
– The web graph at any instant of time contains k connected subgraphs.
– The spider algorithm given in class is a depth-first search through one such web subgraph.
– Avoid re-spidering the same page (keep a hash table of visited URLs).
– Completeness is not guaranteed; a partial solution is to choose seed URLs that are as diverse as possible.

21 Web Spider

PROCEDURE SPIDER4(G, {SEEDS})
    Initialize COLLECTION
    Initialize VISITED
    For every ROOT in SEEDS
        Initialize STACK
        Let STACK := push(ROOT, STACK)
        While STACK is not empty,
            Do URL_curr := pop(STACK)
            Until URL_curr is not in VISITED
            insert-hash(URL_curr, VISITED)
            PAGE := look-up(URL_curr)
            STORE(<URL_curr, PAGE>, COLLECTION)
            For every URL_i in PAGE,
                push(URL_i, STACK)
    Return COLLECTION
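A runnable Python rendering of SPIDER4 under simplifying assumptions: look-up is a plain HTTP fetch, link extraction is a crude regex, and a max_pages cap is added for safety; a real spider would also need robots.txt handling, politeness delays, and URL canonicalization:

```python
import re
import urllib.request

def spider4(seeds, max_pages=100):
    """DFS crawl mirroring SPIDER4: a stack per seed, a VISITED set,
    and a COLLECTION of (url, page) pairs."""
    collection = []                 # COLLECTION
    visited = set()                 # VISITED
    for root in seeds:
        stack = [root]              # STACK := push(ROOT, STACK)
        while stack and len(collection) < max_pages:
            url = stack.pop()
            if url in visited:      # "Do pop ... Until not in VISITED"
                continue
            visited.add(url)        # insert-hash(URL_curr, VISITED)
            try:
                page = urllib.request.urlopen(url, timeout=5).read() \
                    .decode("utf-8", errors="replace")
            except OSError:
                continue
            collection.append((url, page))  # STORE(<URL, PAGE>, COLLECTION)
            for link in re.findall(r'href="(https?://[^"]+)"', page):
                stack.append(link)
    return collection
```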

22 Text Mining
Components of text mining:
– Categorization by topic or genre
– Fact extraction from text
– Data mining from DBs or extracted facts

23 Fact Extraction from Text
– Named Entity Identification: FSA/FST, HMM
– Role-Situated Named Entities: apply context information
– Information Extraction: template matching

24 Named Entity Identification
– Definition of a Finite State Acceptor (FSA)
  – Works over an input source (e.g., a string of words)
  – Outputs "YES" or "NO"
– Definition of a Finite State Transducer (FST)
  – An FSA with variable binding
  – Outputs "NO" or "YES" + variable bindings
  – The variable bindings encode the recognized entity

25 Named Entity Identification
Example: identify numbers such as 1, 2.0, -3.22, +3e2, 4e-5, with D = {0,1,2,3,4,5,6,7,8,9}.
[Diagram: FSA with a start state, an optional +/- sign, one or more digits D, an optional "." followed by digits, and an optional "e" exponent with its own optional sign and digits.]
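An explicit-state sketch of this acceptor; the state names are invented for readability:

```python
def fsa_accepts(s):
    """FSA for the number language above: optional sign, integer part,
    optional '.'+digits, optional 'e'+optional sign+digits.
    Accepts 1, 2.0, -3.22, +3e2, 4e-5; outputs "YES"/"NO"."""
    D = "0123456789"
    # state -> list of (allowed chars, next state)
    trans = {
        "start":   [("+-", "signed"), (D, "int")],
        "signed":  [(D, "int")],
        "int":     [(D, "int"), (".", "dot"), ("e", "e")],
        "dot":     [(D, "frac")],
        "frac":    [(D, "frac"), ("e", "e")],
        "e":       [("+-", "esigned"), (D, "exp")],
        "esigned": [(D, "exp")],
        "exp":     [(D, "exp")],
    }
    accepting = {"int", "frac", "exp"}
    state = "start"
    for ch in s:
        for chars, nxt in trans[state]:
            if ch in chars:
                state = nxt
                break
        else:
            return "NO"     # no transition: reject
    return "YES" if state in accepting else "NO"
```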

26 Data Mining
– Learning by caching
  – What/when to cache
  – When to use/invalidate/update the cache
– Learning from examples (a.k.a. "supervised" learning)
  – Labeled examples for training
  – Learn the mapping from examples to labels
  – E.g.: Naive Bayes, Decision Trees, ...
  – Text categorization (using kNN or other means) is a learning-from-examples task

27 Data Mining
– "Speedup" learning
  – Tuning search heuristics from experience
  – Inducing explicit control knowledge
  – Analogical learning (generalized instances)
– Optimization ("policy" learning)
  – Predicting a continuous objective function
  – E.g.: regression, reinforcement, ...
– New pattern discovery (a.k.a. "unsupervised" learning)
  – Finding meaningful correlations in data
  – E.g.: association rules, clustering, ...

28 Generalize vs. Specialize
– Generalize: start by treating each record in your database as a RULE, then generalize (how? when to stop?)
– Specialize: start with a very general rule (almost useless), then specialize (how? when to stop?)

29 Methods for Supervised DM Classifiers
– Linear separators (regression)
– Naive Bayes (NB)
– Decision Trees (DTs)
– k-Nearest Neighbor (kNN)
– Decision rule induction
– Support Vector Machines (SVMs)
– Neural Networks (NNs)
– ...

