Crawling the Web: Discovery and Maintenance of Large-Scale Web Data
Junghoo Cho, Stanford University

Presentation transcript:

Slide 1: Crawling the Web: Discovery and Maintenance of Large-Scale Web Data
Junghoo Cho, Stanford University

Slide 2: What is a Crawler?
[Diagram: the crawl loop. Starting from a set of initial URLs, the crawler repeatedly takes the next URL from the "to visit" queue, gets the page from the web, extracts URLs from it, and adds unseen ones to the queue, while recording visited URLs and storing the downloaded web pages.]
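A minimal sketch of this loop (not the Stanford crawler; fetch_page and extract_urls are hypothetical helpers standing in for an HTTP client and an HTML link extractor):

    from collections import deque

    def crawl(initial_urls, fetch_page, extract_urls, max_pages=1000):
        """Minimal crawl loop: take the next URL, fetch the page,
        extract its links, and enqueue the ones not seen before."""
        to_visit = deque(initial_urls)       # "to visit urls"
        visited = set()                      # "visited urls"
        pages = {}                           # "web pages"
        while to_visit and len(pages) < max_pages:
            url = to_visit.popleft()         # "get next url"
            if url in visited:
                continue
            visited.add(url)
            page = fetch_page(url)           # "get page"
            pages[url] = page
            for link in extract_urls(page):  # "extract urls"
                if link not in visited:
                    to_visit.append(link)
        return pages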

Slide 3: Applications
- Internet search engines: Google, AltaVista
- Comparison shopping services: My Simon, BizRate
- Data mining: Stanford Web Base, IBM Web Fountain

Slide 4: Why at a University?
- Not much scientific study
  - Little had been known
  - Freshness problems
  - Can we cope with growth?
- Can focus on "fundamental issues"

Slide 5: Crawling at Stanford
- Web Base project
- BackRub crawler, PageRank
- Google
- New Web Base crawler
  - 20,000 lines of C/C++
  - 130M pages collected

Slide 6: Crawling Issues (1)
- Load at visited web sites
  - Space out requests to a site
  - Limit the number of requests to a site per day
  - Limit the depth of the crawl
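A sketch of how the first two bullets can be enforced per site. The class and its defaults (10-second spacing, 3,000 requests per day, borrowed from the experimental setup described later in the talk) are illustrative assumptions, not the WebBase implementation:

    import time
    from urllib.parse import urlparse

    class PolitenessPolicy:
        """Space out requests to a site and cap requests per site per day."""
        def __init__(self, min_delay=10.0, max_per_day=3000):
            self.min_delay = min_delay
            self.max_per_day = max_per_day
            self.last_request = {}   # site -> time of the last request
            self.count_today = {}    # site -> requests issued so far today

        def may_fetch(self, url):
            site = urlparse(url).netloc
            now = time.time()
            if self.count_today.get(site, 0) >= self.max_per_day:
                return False                      # daily cap reached
            if now - self.last_request.get(site, 0.0) < self.min_delay:
                return False                      # too soon after the last request
            self.last_request[site] = now
            self.count_today[site] = self.count_today.get(site, 0) + 1
            return True

    # A real crawler would also reset count_today at day rollover and cap crawl depth.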

Slide 7: Crawling Issues (2)
- Load at the crawler
  - Parallelize
[Diagram: two copies of the crawl loop (get next URL, get page, extract URLs) running side by side over shared "to visit" and "visited" URL sets, with a question mark over how they should coordinate.]

Slide 8: Crawling Issues (3)
- Scope of the crawl
  - Not enough space for "all" pages
  - Not enough time to visit "all" pages
- Solution: visit "important" pages
[Diagram: visited pages as a subset of the Web, with an Intel site as the example.]
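"Visit important pages" can be approximated by keeping the frontier in a priority queue ordered by an importance estimate. The in-link count used below is only one illustrative proxy (the WWW7 page-selection work studies several orderings, including PageRank-based ones), and the class is a hypothetical sketch:

    import heapq

    class ImportanceFrontier:
        """Frontier that always yields the URL with the best importance estimate.
        Here importance = number of in-links discovered so far (a simple proxy)."""
        def __init__(self):
            self.inlinks = {}    # url -> in-link count observed so far
            self.heap = []       # (-inlinks, url); heapq is a min-heap
            self.queued = set()

        def add_link(self, url):
            self.inlinks[url] = self.inlinks.get(url, 0) + 1
            # Lazy update: push again with the new score; stale entries are skipped on pop.
            heapq.heappush(self.heap, (-self.inlinks[url], url))
            self.queued.add(url)

        def pop_best(self):
            while self.heap:
                score, url = heapq.heappop(self.heap)
                if url in self.queued and -score == self.inlinks[url]:
                    self.queued.discard(url)
                    return url
            return None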

Slide 9: Crawling Issues (4)
- Replication
  - Pages mirrored at multiple locations

Slide 10: Crawling Issues (5)
- Incremental crawling
  - How do we avoid crawling from scratch?
  - How do we keep pages "fresh"?

Slide 11: Summary of My Research
- Load on sites [PAWS00]
- Parallel crawler [Tech Report 01]
- Page selection [WWW7]
- Replicated page detection [SIGMOD00]
- Page freshness [SIGMOD00]
- Crawler architecture [VLDB00]

Slide 12: Outline of This Talk
How can we keep pages fresh?
- How does the Web change?
- What do we mean by "fresh" pages?
- How should we refresh pages?

Slide 13: Web Evolution Experiment
- How often does a Web page change?
- How long does a page stay on the Web?
- How long does it take for 50% of the Web to change?
- How do we model Web changes?

Slide 14: Experimental Setup
- February 17 to June 24, 1999
- 270 sites visited (with permission)
  - Identified the 400 sites with the highest "PageRank"
  - Contacted their administrators
- 720,000 pages collected
  - 3,000 pages from each site daily
  - Start at the root and visit breadth-first (getting both new and old pages)
  - Ran only 9pm-6am, with 10 seconds between requests to a site

Slide 15: Average Change Interval
[Figure: fraction of pages versus their average change interval.]

Slide 16: Change Interval by Domain
[Figure: fraction of pages versus average change interval, broken down by domain.]

Slide 17: Modeling Web Evolution
- Changes to a page are modeled as a Poisson process with rate λ
- T is the time to the next event (change)
- f_T(t) = λ e^(−λt)   (t > 0)
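Two standard consequences of this exponential inter-change density (they follow directly from f_T and are not specific to the talk): the expected change interval is 1/λ, and the probability that a page has not changed t time units after it was last downloaded is e^(−λt). In LaTeX:

    \mathbb{E}[T] = \int_0^\infty t \,\lambda e^{-\lambda t}\, dt = \frac{1}{\lambda},
    \qquad
    \Pr[T > t] = \int_t^\infty \lambda e^{-\lambda s}\, ds = e^{-\lambda t}.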

Slide 18: Change Interval of Pages
[Figure: for pages that change every 10 days on average, the fraction of changes with a given interval (in days), with the empirical distribution plotted against the Poisson model.]

Slide 19: Change Metrics: Freshness
- Freshness of element e_i at time t:
  F(e_i; t) = 1 if e_i is up-to-date at time t, 0 otherwise
- Freshness of the database S of N elements at time t (assuming "equal importance" of pages):
  F(S; t) = (1/N) Σ_{i=1..N} F(e_i; t)
[Diagram: elements e_i on the web and their copies in the local database.]

Slide 20: Change Metrics: Age
- Age of element e_i at time t:
  A(e_i; t) = 0 if e_i is up-to-date at time t, t − (time e_i was modified) otherwise
- Age of the database S at time t (assuming "equal importance" of pages):
  A(S; t) = (1/N) Σ_{i=1..N} A(e_i; t)
[Diagram: elements e_i on the web and their copies in the local database.]
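A direct transcription of the two definitions, as a minimal sketch. It assumes we know each element's true last change time and last download time; in practice the crawler does not know the former, which is why the estimation problem of slides 31-33 matters.

    def freshness_and_age(elements, now):
        """elements: list of dicts with 'last_source_change' and 'last_download'
        timestamps (same clock as `now`). Returns (F(S; now), A(S; now))."""
        n = len(elements)
        total_f, total_a = 0.0, 0.0
        for e in elements:
            up_to_date = e["last_download"] >= e["last_source_change"]
            total_f += 1.0 if up_to_date else 0.0                              # F(e_i; t)
            total_a += 0.0 if up_to_date else now - e["last_source_change"]    # A(e_i; t)
        return total_f / n, total_a / n

    db = [
        {"last_source_change": 100.0, "last_download": 120.0},  # up to date
        {"last_source_change": 150.0, "last_download": 120.0},  # stale since t = 150
    ]
    print(freshness_and_age(db, now=200.0))   # -> (0.5, 25.0)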

Slide 21: Change Metrics over Time
[Figure: F(e_i) and A(e_i) plotted over time. When the source page is updated, F(e_i) drops from 1 to 0 and A(e_i) starts growing; when the crawler refreshes its copy, F(e_i) returns to 1 and A(e_i) resets to 0.]
Time averages: F̄(e_i) = lim_{t→∞} (1/t) ∫_0^t F(e_i; s) ds, with Ā(e_i) defined the same way from A; the database-level averages F̄(S) and Ā(S) are the same limits applied to F(S; t) and A(S; t).

Slide 22: Trick Question
- A two-page database: e_1 changes daily, e_2 changes once a week
- We can visit one page per week
- How should we visit pages?
  - e_1 e_2 e_1 e_2 e_1 e_2 e_1 e_2 ...  [uniform]
  - e_1 e_1 e_1 e_1 e_1 e_1 e_1 e_2 e_1 e_1 ...  [proportional]
  - e_1 e_1 e_1 e_1 e_1 e_1 ...
  - e_2 e_2 e_2 e_2 e_2 e_2 ...
  - ?
[Diagram: e_1 and e_2 on the web and their copies in the local database.]

Slide 23: Proportional Is Often Not Good!
- Visit the fast-changing e_1 → get 1/2 day of freshness
- Visit the slow-changing e_2 → get 1/2 week of freshness
- Visiting e_2 is a better deal!
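A rough Monte Carlo sketch of the trick question, not from the talk: it treats "changes daily / changes weekly" as Poisson rates and spends one page refresh per week according to a fixed schedule. On a typical run the uniform schedule comes out roughly twice as fresh as the proportional one, matching the slide's argument.

    import random

    def avg_db_freshness(schedule, rates_per_hour, weeks=2000):
        """Monte Carlo estimate of average database freshness.
        schedule: which page index to refresh in each successive week (repeats).
        rates_per_hour: Poisson change rate of each page, in changes per hour."""
        n = len(rates_per_hour)
        fresh = [True] * n
        hours_per_week = 7 * 24
        total = 0.0
        for hour in range(weeks * hours_per_week):
            # Each page changes during this hour with probability ~ rate * 1h.
            for i, rate in enumerate(rates_per_hour):
                if random.random() < rate:
                    fresh[i] = False
            # One page refresh at the start of every week, per the schedule.
            if hour % hours_per_week == 0:
                week = hour // hours_per_week
                fresh[schedule[week % len(schedule)]] = True
            total += sum(fresh) / n
        return total / (weeks * hours_per_week)

    rates = [1 / 24, 1 / (7 * 24)]       # e_1 changes ~daily, e_2 ~weekly
    print("uniform      :", avg_db_freshness([0, 1], rates))         # e_1 e_2 e_1 e_2 ...
    print("proportional :", avg_db_freshness([0] * 7 + [1], rates))  # 7 visits to e_1 per visit to e_2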

Slide 24: Optimal Refresh Frequency
Problem: given the page change frequencies λ_i and the available refresh rate f, find the per-page refresh frequencies f_i that maximize the time-averaged freshness F̄(S).

Slide 25: Solution
- Compute the time-averaged freshness F̄(e_i) as a function of λ_i and f_i
- Apply the Lagrange multiplier method to the constrained maximization (a numerical sketch follows below)
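The slide's closed-form solution is not captured in the transcript, so the following is only a numerical sketch under one added assumption: each page is refreshed periodically, in which case its time-averaged freshness is F̄(f; λ) = (1 − e^(−λ/f)) / (λ/f). Because that function is concave in f, greedily handing out the refresh budget in small increments approximates the Lagrange solution, and it reproduces the qualitative result of this line of work: the fastest-changing page gets few or no visits.

    import math

    def avg_freshness(f, lam):
        """Time-averaged freshness of a page with Poisson change rate lam,
        refreshed periodically at rate f (i.e. every 1/f time units)."""
        if f <= 0:
            return 0.0
        x = lam / f
        return (1.0 - math.exp(-x)) / x

    def allocate(lams, budget, step=0.001):
        """Split a total refresh-rate budget across pages by repeatedly giving a
        small increment to the page with the largest marginal freshness gain.
        The greedy is valid because avg_freshness is concave in f."""
        f = [0.0] * len(lams)
        for _ in range(round(budget / step)):
            gains = [avg_freshness(fi + step, lam) - avg_freshness(fi, lam)
                     for fi, lam in zip(f, lams)]
            f[gains.index(max(gains))] += step
        return f

    # Change rates in changes/day; a budget of 2 page refreshes/day to split.
    lams = [0.02, 0.1, 0.5, 2.0, 10.0]
    for lam, fi in zip(lams, allocate(lams, budget=2.0)):
        print(f"change rate {lam:5.2f}/day  ->  refresh about {fi:.2f} times/day")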

Slide 26: Optimal Refresh Frequency
[Figure: the optimal refresh frequency curve.]
- The shape of the curve is the same in all cases
- Holds for any change frequency distribution

Slide 27: Optimal Refresh for Age
[Figure: the optimal refresh frequency curve when minimizing age.]
- The shape of the curve is the same in all cases
- Holds for any change frequency distribution

Slide 28: Comparing Policies
[Table comparing the refresh policies, based on statistics from the experiment and a revisit frequency of once every month.]

Slide 29: Not Every Page Is Equal!
- Some pages are "more important" than others
- Example: e_1 is accessed by users 20 times/day, e_2 only 10 times/day
- In general, pages should therefore be weighted by their importance

Slide 30: Weighted Freshness
[Figure: curves for weights w = 1 and w = 2 plotted against the refresh frequency f.]

Slide 31: Change Frequency Estimation
How do we estimate the change frequency?
- Naïve estimator: X/T
  - X: number of detected changes
  - T: monitoring period
  - Example: 2 detected changes in 10 days gives 0.2 times/day
- Problem: the change history is incomplete
[Diagram: a timeline of daily page visits; a change is only detected at the next visit, so multiple changes between two visits are counted as one.]

Slide 32: Improved Estimator
- Based on the Poisson model
  - X: number of detected changes
  - N: number of accesses
  - f: access frequency
- Example: 3 detected changes in 10 daily accesses gives an estimate of 0.36 times/day
- Accounts for "missed" changes
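The estimator formula itself is not preserved in the transcript. The sketch below uses λ̂ = −f · ln(1 − X/N), which is the Poisson-based form consistent with the slide's example (−ln(1 − 3/10) ≈ 0.36); treat that exact form as a reconstruction, since the published work may add small-sample corrections. The simulation shows how the naïve X/T estimate is biased low while the Poisson-based one recovers the true rate.

    import math, random

    def estimate_change_rate(true_rate, visits_per_day=1.0, days=10000):
        """Compare the naive X/T estimator with the Poisson-based one when the
        crawler only sees whether the page changed since its previous visit."""
        n_visits = int(days * visits_per_day)
        p_detect = 1.0 - math.exp(-true_rate / visits_per_day)  # P(>= 1 change between visits)
        x = sum(random.random() < p_detect for _ in range(n_visits))
        naive = x / days                                                 # X / T
        poisson_based = -visits_per_day * math.log(1.0 - x / n_visits)   # -f * ln(1 - X/N)
        return naive, poisson_based

    true_rate = 0.36   # changes/day, matching the scale of the slide's example
    naive, improved = estimate_change_rate(true_rate)
    print(f"true rate      : {true_rate:.3f}/day")
    print(f"naive X/T      : {naive:.3f}/day   (systematically low)")
    print(f"Poisson-based  : {improved:.3f}/day")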

Slide 33: Improved Estimator
Properties analyzed:
- Bias
- Efficiency
- Consistency

Slide 34: Improvement Significant?
Application to a Web crawler:
- Visit pages once every week for 5 weeks
- Estimate each page's change frequency
- Adjust the revisit frequency based on the estimate
  - Uniform: do not adjust
  - Naïve: based on the naïve estimator
  - Ours: based on our improved estimator

Slide 35: Improvement from Our Estimator

    Policy     Detected changes    Ratio to uniform
    Uniform    2,147,589           100%
    Naïve      4,145,582           193%
    Ours       4,892,116           228%

(9,200,000 visits in total)

Slide 36: Other Estimators
- Irregular access intervals
- Last-modified dates
- Categorization

Slide 37: Summary
- Web evolution experiment
- Change metrics
- Refresh policy
- Frequency estimator

Slide 38: Contributions
- Freshness [SIGMOD00]
- Page selection [WWW7]
- Replicated page detection [SIGMOD00]
- Load on sites [PAWS00]
- Parallel crawler [Tech Report 01]
- Crawler architecture [VLDB00]

Slide 39: What's Next?
- New search paradigm
  - Example: "What is the middle name of Thomas Edison?" could be answered by searching the Web for the pattern Thomas [a-z]+ Edison
- Continuous data streams
  - Web logs, network traffic engineering
  - How should the data model change?
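A toy illustration of the pattern-based idea; the text snippet is invented, and note that the slide's [a-z]+ needs case-insensitive matching (or [A-Za-z]+) to catch a capitalized middle name:

    import re

    text = "The inventor Thomas Alva Edison held more than 1,000 patents."
    for m in re.finditer(r"Thomas ([a-z]+) Edison", text, flags=re.IGNORECASE):
        print("candidate middle name:", m.group(1))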

Slide 40: The End
Thank you for your attention.
For more information, visit http://www-db.stanford.edu/~cho/

