Information Retrieval and Web Search Web search. Spidering Instructor: Rada Mihalcea Class web page: (some of these slides were adapted from Ray Mooney’s IR course at UT Austin)

Slide 1 Today’s Topics Information Retrieval applied to the Web The Web – the largest collection of documents available today –Still, a collection –Should be able to apply “traditional” IR techniques, with few changes Web Search Spidering

Slide 2 The World Wide Web Developed by Tim Berners-Lee in 1990 at CERN to organize research documents available on the Internet. Combined idea of documents available by FTP with the idea of hypertext to link documents. Developed initial HTTP network protocol, URLs, HTML, and first “web server.”

Slide 3 Web Pre-History Ted Nelson developed the idea of hypertext in 1965. Doug Engelbart invented the mouse and built the first implementation of hypertext in the late 1960’s at SRI. ARPANET was developed in the early 1970’s. –For what main reason? The basic technology was in place in the 1970’s, but it took the PC revolution and widespread networking to inspire the web and make it practical.

Slide 4 Web Browser History Early browsers were developed in 1992 (Erwise, ViolaWWW). In 1993, Marc Andreessen and Eric Bina at UIUC NCSA developed the Mosaic browser and distributed it widely. Andreessen joined with James Clark (Stanford Prof. and Silicon Graphics founder) to form Mosaic Communications Inc. in 1994 (which became Netscape to avoid conflict with UIUC). Microsoft licensed the original Mosaic from UIUC and used it to build Internet Explorer in 1995.

Slide 5 Search Engine Early History By late 1980’s many files were available by anonymous FTP. In 1990, Alan Emtage of McGill Univ. developed Archie (short for “archives”) – Assembled lists of files available on many FTP servers. –Allowed regex search of these file names. In 1993, Veronica and Jughead were developed to search names of text files available through Gopher servers.

Slide 6 Web Search History In 1993, early web robots (spiders) were built to collect URL’s: –Wanderer –ALIWEB (Archie-Like Index of the WEB) –WWW Worm (indexed URL’s and titles for regex search) In 1994, Stanford grad students David Filo and Jerry Yang started manually collecting popular web sites into a topical hierarchy called Yahoo.

Slide 7 Web Search History (cont) In early 1994, Brian Pinkerton developed WebCrawler as a class project at U Wash. (eventually became part of Excite and AOL). A few months later, Fuzzy Mauldin, a grad student at CMU, developed Lycos. First to use a standard IR system as developed for the DARPA Tipster project. First to index a large set of pages. In late 1995, DEC developed AltaVista. Used a large farm of Alpha machines to quickly process large numbers of queries. Supported boolean operators and phrases.

Slide 8 Web Search Recent History In 1998, Larry Page and Sergey Brin, Ph.D. students at Stanford, started Google. Main advance is use of link analysis to rank results partially based on authority. Conclusions: –Most popular search engines were developed by graduate students –Don’t wait too long to develop your own search engine!

Slide 9 Web Challenges for IR Distributed Data: Documents spread over millions of different web servers. Volatile Data: Many documents change or disappear rapidly (e.g. dead links). Large Volume: Billions of separate documents. Unstructured and Redundant Data: No uniform structure, HTML errors, up to 30% (near) duplicate documents. Quality of Data: No editorial control, false information, poor quality writing, typos, etc. Heterogeneous Data: Multiple media types (images, video, VRML), languages, character sets, etc.

Slide 10 Number of Web Servers

Slide 11 Number of Web Pages

Slide 12 Number of Web Pages Indexed Assuming about 20KB per page, 1 billion pages is about 20 terabytes of data. SearchEngineWatch, Aug. 15, 2001
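Spelling out the estimate on this slide: 10^9 pages × 20 KB/page ≈ 2 × 10^13 bytes, i.e. roughly 20 terabytes (taking 1 TB = 10^12 bytes).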

Slide 13 Growth of Web Pages Indexed Google lists the current number of pages searched. SearchEngineWatch, Aug. 15, 2001

Slide 14 Zipf’s Law on the Web Length of web pages has a Zipfian distribution. Number of hits to a web page has a Zipfian distribution.
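As a concrete (hypothetical) illustration of what a Zipfian distribution of hits looks like, the sketch below fits the slope of the rank-frequency curve in log-log space; under Zipf's law that slope is close to -1. The hit counts are made up.

```python
import math

# Hypothetical per-page hit counts, sorted by decreasing popularity (rank 1, 2, ...).
hits = [10000, 4800, 3300, 2500, 2100, 1600, 1400, 1250, 1100, 1000]

xs = [math.log(rank) for rank in range(1, len(hits) + 1)]
ys = [math.log(count) for count in hits]

# Least-squares slope of log(frequency) vs. log(rank); a slope near -1
# is the classic Zipf shape (frequency proportional to 1/rank).
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
print(f"estimated Zipf exponent: {-slope:.2f}")
```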

Slide 15 Manual Hierarchical Web Taxonomies Yahoo approach of using human editors to assemble a large hierarchically structured directory of web pages. – Open Directory Project is a similar approach based on the distributed labor of volunteer editors (“net-citizens provide the collective brain”). Used by most other search engines. Started by Netscape. –

Slide 16 Web Search Using IR [Diagram: the Web is crawled by a Spider into a Document corpus; the IR System matches a Query String against the corpus and returns Ranked Documents (1. Page1, 2. Page2, 3. Page3, …).] The spider represents the main difference compared to traditional IR.

Slide 17 Spiders (Robots/Bots/Crawlers) Start with a comprehensive set of root URL’s from which to begin the search. Follow all links on these pages recursively to find additional pages. Index/process all newly found pages in an inverted index as they are encountered. May allow users to directly submit pages to be indexed (and crawled from). –You’ll need to build a simple spider for Assignment 4 to traverse the UNT webpages.

Slide 18 Search Strategies Breadth-first Search

Slide 19 Search Strategies (cont) Depth-first Search

Slide 20 Search Strategy Trade-Off’s Breadth-first explores uniformly outward from the root page but requires memory of all nodes on the previous level (exponential in depth). Standard spidering method. Depth-first requires memory of only depth times branching-factor (linear in depth) but gets “lost” pursuing a single thread. Both strategies can be easily implemented using a queue of links (URL’s).

Slide 21 Avoiding Page Duplication Must detect when revisiting a page that has already been spidered (web is a graph not a tree). Must efficiently index visited pages to allow rapid recognition test. –Tree indexing (e.g. trie) –Hashtable Index page using URL as a key. –Must canonicalize URL’s (e.g. delete ending “/”) –Not detect duplicated or mirrored pages. Index page using textual content as a key. –Requires first downloading page.
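A minimal sketch (my own, not the course's code) of the two recognition tests this slide describes: a hashtable of URLs already spidered, plus a hash of the downloaded text to catch duplicated or mirrored pages.

```python
import hashlib

visited_urls = set()     # pages indexed by (canonicalized) URL
seen_digests = set()     # pages indexed by textual content

def already_visited(url):
    """Rapid recognition test: have we spidered this URL before?"""
    if url in visited_urls:
        return True
    visited_urls.add(url)
    return False

def duplicate_content(page_text):
    """Catch mirrored/duplicated pages; requires downloading the page first."""
    digest = hashlib.sha1(page_text.encode("utf-8")).hexdigest()
    if digest in seen_digests:
        return True
    seen_digests.add(digest)
    return False
```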

Slide 22 Spidering Algorithm Initialize queue (Q) with initial set of known URL’s. Until Q empty or page or time limit exhausted: Pop URL, L, from front of Q. If L does not point to an HTML page (.gif, .jpeg, .ps, .pdf, .ppt, …) continue loop. If already visited L, continue loop. Download page, P, for L. If cannot download P (e.g. 404 error, robot excluded) continue loop. Index P (e.g. add to inverted index or store cached copy). Parse P to obtain list of new links N. Append N to the end of Q.
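The loop above translates fairly directly into Python. The sketch below is my own rendering (not the course's reference implementation); it uses only the standard library, and the seed URLs, page limit, and index_page step are placeholders.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urldefrag, urljoin
from urllib.request import urlopen

SKIP_EXTENSIONS = (".gif", ".jpeg", ".jpg", ".png", ".ps", ".pdf", ".ppt")

class LinkParser(HTMLParser):
    """Collect href values from <a> tags while parsing a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def index_page(url, text):
    """Placeholder for 'add to inverted index or store cached copy'."""
    print(f"indexed {url} ({len(text)} chars)")

def spider(seed_urls, page_limit=100):
    queue = deque(seed_urls)                 # initialize Q with known URLs
    visited = set()
    indexed = 0
    while queue and indexed < page_limit:    # until Q empty or page limit exhausted
        url = queue.popleft()                # pop URL L from the front of Q
        if url.lower().endswith(SKIP_EXTENSIONS):
            continue                         # L does not point to an HTML page
        if url in visited:
            continue                         # already visited L
        visited.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                page = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue                         # cannot download P (404, network error, ...)
        index_page(url, page)                # index P
        indexed += 1
        parser = LinkParser()
        parser.feed(page)                    # parse P to obtain new links N
        for link in parser.links:
            absolute, _ = urldefrag(urljoin(url, link))
            queue.append(absolute)           # append N to the end of Q
```

Robot exclusion and the duplicate-content checks from slide 21 would slot into this loop just before the download and index steps.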

Slide 23 Queueing Strategy How new links are added to the queue determines search strategy. FIFO (append to end of Q) gives breadth-first search. LIFO (add to front of Q) gives depth-first search. Heuristically ordering the Q gives a “focused crawler” that directs its search towards “interesting” pages.
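A sketch of the three options in Python (the link and the score are hypothetical; popping is always from the front, as in the spider sketch on slide 22):

```python
from collections import deque
import heapq

new_link = "http://example.edu/page.html"   # hypothetical link just extracted

# FIFO: append new links to the end, pop from the front -> breadth-first search.
bfs_queue = deque()
bfs_queue.append(new_link)

# LIFO: push new links onto the front, still pop from the front -> depth-first search.
dfs_queue = deque()
dfs_queue.appendleft(new_link)

# Focused crawler: order the queue heuristically, e.g. a heap of (priority, url)
# pairs where the most "interesting" link is popped first.
focused_queue = []
interest_score = 0.9                         # hypothetical interestingness score
heapq.heappush(focused_queue, (-interest_score, new_link))
```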

Slide 24 Restricting Spidering You can restrict the spider to a particular site. –Remove links to other sites from Q. –In Assignment 4, make sure you restrict your spider to the UNT domain! If a new link does not contain the unt.edu marker, do not follow it! You can restrict the spider to a particular directory. –Remove links not in the specified directory. Obey page-owner restrictions (robot exclusion).
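A possible filter (an assumption, not the assignment's required code) applied before a new link is appended to Q; the directory prefix is hypothetical:

```python
from urllib.parse import urlparse

def in_domain(url, domain="unt.edu"):
    """Keep the spider inside one site, e.g. the UNT domain for Assignment 4."""
    host = urlparse(url).hostname or ""
    return host == domain or host.endswith("." + domain)

def in_directory(url, prefix="http://www.example.edu/courses/"):  # hypothetical prefix
    """Keep the spider inside one directory subtree."""
    return url.startswith(prefix)

print(in_domain("http://www.unt.edu/somepage.html"))    # True
print(in_domain("http://www.example.com/page.html"))    # False
```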

Slide 25 Link Extraction Must find all links in a page and extract URLs. – Must complete relative URL’s using current page URL: – to – to
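Completing a relative URL against the current page's URL is exactly what urllib.parse.urljoin does; a small sketch with made-up URLs standing in for the slide's examples:

```python
from urllib.parse import urljoin

base = "http://www.example.edu/dept/index.html"   # hypothetical current page URL

print(urljoin(base, "courses/ir.html"))
# -> http://www.example.edu/dept/courses/ir.html
print(urljoin(base, "/about.html"))
# -> http://www.example.edu/about.html
```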

Slide 26 URL Syntax A URL has the following syntax: –<scheme>://<host><path>?<query>#<fragment> A query passes variable values from an HTML form and has the syntax: –<variable>=<value>&<variable>=<value>… A fragment is also called a reference or a ref and is a pointer within the document to a point specified by an anchor tag of the form: –<A NAME="<fragment>">
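The standard library can split a URL into these components; a sketch with a made-up URL:

```python
from urllib.parse import parse_qs, urlsplit

url = "http://www.example.edu:8080/courses/ir/index.html?term=fall&year=2004#schedule"
parts = urlsplit(url)

print(parts.scheme)                 # 'http'
print(parts.hostname, parts.port)   # 'www.example.edu' 8080
print(parts.path)                   # '/courses/ir/index.html'
print(parse_qs(parts.query))        # {'term': ['fall'], 'year': ['2004']}
print(parts.fragment)               # 'schedule'
```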

Slide 27 Link Canonicalization Equivalent variations of ending directory normalized by removing ending slash. – – Internal page fragments (ref’s) removed: – –
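A small canonicalizer in the spirit of this slide (my own sketch, with a made-up URL): drop the ending slash on a directory URL and strip internal page fragments.

```python
from urllib.parse import urldefrag

def canonicalize(url):
    url, _fragment = urldefrag(url)   # remove internal page fragments ("#ref")
    if url.endswith("/"):
        url = url[:-1]                # normalize the ending slash away
    return url

print(canonicalize("http://www.example.edu/dept/#staff"))
# -> http://www.example.edu/dept
```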

Slide 28 Anchor Text Indexing Extract anchor text (between <a href="…"> and </a>) of each link followed. Anchor text is usually descriptive of the document to which it points. Add anchor text to the content of the destination page to provide additional relevant keyword indices. Used by Google: –<a href="…">Evil Empire</a> –<a href="…">IBM</a>

Slide 29 Anchor Text Indexing (cont’d) Helps when descriptive text in destination page is embedded in image logos rather than in accessible text. Many times anchor text is not useful: –“click here” Increases content more for popular pages with many in-coming links, increasing recall of these pages. May even give higher weights to tokens from anchor text.

Slide 30 Robot Exclusion Web sites and pages can specify that robots should not crawl/index certain areas. Two components: –Robots Exclusion Protocol: Site wide specification of excluded directories. –Robots META Tag: Individual document tag to exclude indexing or following links.

Slide 31 Robots Exclusion Protocol Site administrator puts a “robots.txt” file at the root of the host’s web directory. – – File is a list of excluded directories for a given robot (user-agent). –Exclude all robots from the entire site: User-agent: * Disallow: /

Slide 32 Robot Exclusion Protocol Examples Exclude specific directories: User-agent: * Disallow: /tmp/ Disallow: /cgi-bin/ Disallow: /users/paranoid/ Exclude a specific robot: User-agent: GoogleBot Disallow: / Allow a specific robot: User-agent: GoogleBot Disallow:
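A spider can check a URL against such rules with Python's standard urllib.robotparser; a sketch (my own, not from the slides) using the first example above:

```python
import urllib.robotparser

rules = """\
User-agent: *
Disallow: /tmp/
Disallow: /cgi-bin/
Disallow: /users/paranoid/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(rules.splitlines())   # in practice: rp.set_url(".../robots.txt"); rp.read()

print(rp.can_fetch("AnyBot", "http://www.example.edu/users/paranoid/home.html"))  # False
print(rp.can_fetch("AnyBot", "http://www.example.edu/courses/ir.html"))           # True
```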

Slide 33 Robot Exclusion Protocol Details Only use blank lines to separate different User-agent disallowed directories. One directory per “Disallow” line. No regex patterns in directories.

Slide 34 Robots META Tag Include META tag in HEAD section of a specific HTML document. –<meta name="robots" content="…"> Content value is a pair of values for two aspects: –index | noindex: Allow/disallow indexing of this page. –follow | nofollow: Allow/disallow following links on this page.

Slide 35 Robots META Tag (cont) Special values: –all = index,follow –none = noindex,nofollow Examples: –<meta name="robots" content="noindex,follow"> –<meta name="robots" content="index,nofollow"> –<meta name="robots" content="none">

Slide 36 Robot Exclusion Issues META tag is newer and less well-adopted than “robots.txt”. Standards are conventions to be followed by “good robots.” Companies have been prosecuted for “disobeying” these conventions and “trespassing” on private cyberspace.

Slide 37 Multi-Threaded Spidering Bottleneck is network delay in downloading individual pages. Best to have multiple threads running in parallel each requesting a page from a different host. Distribute URL’s to threads to guarantee equitable distribution of requests across different hosts to maximize throughput and avoid overloading any single server. Early Google spider had multiple co-ordinated crawlers with about 300 threads each, together able to download over 100 pages per second.
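A minimal sketch of overlapping network delay with parallel downloads (a toy illustration, not Google's crawler; the URLs are placeholders and should ideally point at different hosts):

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def download(url):
    """Fetch one page; return (url, body) or (url, None) on failure."""
    try:
        with urlopen(url, timeout=10) as response:
            return url, response.read()
    except Exception:
        return url, None

urls = [
    "http://www.example.edu/",
    "http://www.example.org/",
    "http://www.example.net/",
]

with ThreadPoolExecutor(max_workers=3) as pool:      # one thread per request
    for url, body in pool.map(download, urls):
        print(url, "failed" if body is None else f"{len(body)} bytes")
```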

Slide 38 Directed/Focused Spidering Sort queue to explore more “interesting” pages first. Two styles of focus: –Topic-Directed –Link-Directed

Slide 39 Topic-Directed Spidering Assume desired topic description or sample pages of interest are given. Sort queue of links by the similarity (e.g. cosine metric) of their source pages and/or anchor text to this topic description. –Related to Topic Tracking and Detection
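A toy version of the queue-ordering idea (my own sketch; the topic, links, and surrounding text are made up): score each candidate link by the cosine similarity between its source-page/anchor text and the topic description, and explore the best-scoring link first.

```python
import heapq
import math
from collections import Counter

def cosine(text_a, text_b):
    """Cosine similarity between simple bag-of-words term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

topic = "information retrieval web search engines indexing"
candidates = {   # hypothetical link -> text of its source page / anchor text
    "http://example.edu/ir-notes.html": "lecture notes on information retrieval and web search",
    "http://example.edu/recipes.html": "favorite pasta recipes and cooking tips",
}

frontier = []    # heap of (-similarity, url): most similar pops first
for url, context in candidates.items():
    heapq.heappush(frontier, (-cosine(topic, context), url))

print(heapq.heappop(frontier)[1])   # the most on-topic link is crawled next
```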

Slide 40 Link-Directed Spidering Monitor links and keep track of in-degree and out-degree of each page encountered. Sort queue to prefer popular pages with many in-coming links (authorities). Sort queue to prefer summary pages with many out-going links (hubs). –Google’s PageRank algorithm
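A sketch of the bookkeeping this implies (an illustration only, not PageRank itself): count in-links as pages are parsed, and pick the most-cited frontier page next.

```python
from collections import Counter

in_degree = Counter()      # how many discovered pages link to each URL
out_degree = Counter()     # how many links each crawled page contains

def record_links(source_url, outgoing_links):
    out_degree[source_url] = len(outgoing_links)
    for target in outgoing_links:
        in_degree[target] += 1

def next_authority(frontier):
    """Prefer the frontier page with the most in-coming links seen so far."""
    return max(frontier, key=lambda url: in_degree[url])
```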

Slide 41 Keeping Spidered Pages Up to Date Web is very dynamic: many new pages, updated pages, deleted pages, etc. Periodically check spidered pages for updates and deletions: –Just look at header info (e.g. META tags on last update) to determine if page has changed, only reload entire page if needed. Track how often each page is updated and preferentially return to pages which are historically more dynamic. Preferentially update pages that are accessed more often to optimize freshness of more popular pages.
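One concrete way to "look at header info" before reloading a page is an HTTP conditional GET (a sketch of mine, not from the slides; it only helps if the server honors If-Modified-Since):

```python
import urllib.error
import urllib.request

def refresh_if_changed(url, last_fetched):
    """last_fetched is an HTTP date string, e.g. 'Mon, 15 Mar 2004 10:00:00 GMT'."""
    request = urllib.request.Request(url, headers={"If-Modified-Since": last_fetched})
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.read()        # page changed: reload and re-index it
    except urllib.error.HTTPError as err:
        if err.code == 304:
            return None                   # Not Modified: keep the cached copy
        raise
```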