
1 SEEDEEP: A System for Exploring and Querying Deep Web Data Sources
Gagan Agrawal, Fan Wang, Tantan Liu
Ohio State University

2 The Deep Web
The definition of "the deep web" from Wikipedia: the deep web refers to World Wide Web content that is not part of the surface web (the portion indexed by standard search engines).

3 The Deep Web is Huge
500 times larger than the surface web
7,500 terabytes of information (19 terabytes in the surface web)
550 billion documents (1 billion in the surface web)
More than 200,000 deep web sites
Relevant to every domain: scientific, e-commerce, market

4 The Deep Web is Informative
Deeper content than the surface web
  Surface web: text format
  Deep web: specific and relational information
More than half of the deep web content is in topic-specific databases
  Biology, Chemistry, Medical, Travel, Business, Academia, and many more…
95 percent of the deep web is publicly accessible

5 Hard to Use the Deep Web
Challenges for Integration
  Self-maintained and self-created data sources
  Heterogeneous and hidden metadata
  Dynamically updated metadata
Challenges for Searching
  Lack of a standard input format
  Data redundancy and data source ranking
  Data source dependency
Challenges for Performance
  Network latency and caching mechanism
  Fault tolerance

6 Motivating Example (1)
Biologists have identified that gene X and protein Y are contributors to a disease. They want to examine the SNPs (Single Nucleotide Polymorphisms) located in the genes that share the same functions as either X or Y. In particular, for every SNP located in such a gene and having a heterozygosity value greater than 0.01, the biologists want to know the maximal SNP frequency in the Asian population.

7 Motivating Example (2)
The genes that have the same functions as X
The genes that have the same functions as Y
The frequency information of the SNPs located in these genes, filtered by heterozygosity values

8 Motivating Example (3)
How do you know NCBI Gene can provide gene function information given a gene name?
Do the NCBI Gene and GO data sources both use "function" to represent the meaning of "gene function"?
Three data sources, dbSNP, Alfred, and Seattle, can provide SNP frequency data; why choose dbSNP?
I cannot filter SNPs by heterozygosity values on dbSNP.
What if the SNP500Cancer data source is unavailable?
A path clearly guides the search.

9 Our Contribution: The SEEDEEP System
Discover data source metadata
Discover data source inter-dependency
Generate query plans for search
Query caching mechanism
Fault tolerance mechanism

10 Outline
Introduction and Motivation
System Core
  Query planning problem
  Query planning algorithms
Other system components
  Query caching
  Fault tolerance
  Schema mining
Other Issues

11 What queries does our system support?
They want to examine the SNPs located in the genes that share the same functions as either X or Y. In particular, for every SNP located in such a gene and having a heterozygosity value greater than 0.01, the biologists want to know the maximal SNP frequency in the Asian population.
Selection-Projection-Join (SPJ) queries
Aggregation-Groupby queries
Nested queries: condition and entity

12 Data Source Model (1): Single Data Source
Each data source is modeled as a virtual relational table (a sketch follows below)
Virtual relational data elements
  MI: must fill-in input attributes
  OI: optional fill-in input attributes
  O: output attributes
  C: inherent data source constraints
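A minimal sketch of the single-source model above; the class, attribute, and source names are illustrative assumptions, not SEEDEEP's actual interface.

from dataclasses import dataclass, field

@dataclass
class DeepWebSource:
    name: str
    must_inputs: set = field(default_factory=set)      # MI: must fill-in input attributes
    optional_inputs: set = field(default_factory=set)  # OI: optional fill-in input attributes
    outputs: set = field(default_factory=set)          # O: output attributes
    constraints: dict = field(default_factory=dict)    # C: inherent data source constraints

    def answerable(self, bound_attributes):
        # The source can be queried once every must fill-in attribute is bound.
        return self.must_inputs <= set(bound_attributes)

# Example: a gene-centric source that needs a gene name and returns function terms.
gene_source = DeepWebSource(
    name="NCBI Gene",
    must_inputs={"gene_name"},
    optional_inputs={"organism"},
    outputs={"gene_function", "gene_id"},
)
print(gene_source.answerable({"gene_name"}))  # True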

13 Data Source Model (2): Correlated Sources
Hyper-graph dependency model
  Multi-source dependency
Dependency relations between data sources D1 and D2
  Type 1: D1 provides must fill-in inputs for D2
  Type 2: D1 provides optional fill-in inputs for D2

14 Planning Algorithm Overview
Tree representation of the user query
  Each node represents a simple query
  A divide-and-conquer approach
  A final combination step generates the final query plan
Query types
  Aggregation query
  Nested entity sub-query
  Ordinary query

15 Query Planning Problem for Ordinary Query
Ordinary query format
  Entity keywords, attribute keywords, comparison predicates
  Standard select-project-join SQL query style
Formulation
  Sub-graph set cover problem over starting and target data sources, NP-hard

16 Bidirectional Query Planning Algorithm (1)
Heuristic algorithm based on the algorithm introduced by Kacholia et al.
Algorithm overview
  Starting nodes
  Target nodes
  Bidirectional graph traversal

17 Bidirectional Query Planning Algorithm (2)
How to find the minimal sub-graph (a sketch follows below)
  Find the shortest paths from starting nodes to target nodes
  Dijkstra's shortest path algorithm
Benefit function
  Data source coverage
  Data source data quality, ontology based
  User constraints matching
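A minimal sketch, not SEEDEEP's implementation, of the shortest-path step: connect a starting source to a target source over the dependency graph with Dijkstra's algorithm. The edge weights are assumed to come from a benefit function (coverage, data quality, constraint matching), with lower weight meaning more beneficial; the graph and weights below are illustrative.

import heapq

def shortest_path(graph, start, target):
    """graph: {node: [(neighbor, weight), ...]}"""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            # Reconstruct the path target -> start, then reverse it.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return list(reversed(path)), d
        if d > dist.get(node, float("inf")):
            continue
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(heap, (nd, neighbor))
    return None, float("inf")

# Hypothetical dependency edges between biological sources (weights illustrative).
graph = {
    "NCBI Gene": [("GO", 1.0), ("dbSNP", 2.0)],
    "GO": [("dbSNP", 1.5)],
    "dbSNP": [("SNP500Cancer", 1.0)],
}
print(shortest_path(graph, "NCBI Gene", "SNP500Cancer"))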

18 Query Planning Problem for Aggregation Query
Node connection property: the aggregation data source(s) must be directly or indirectly connected with the grouping data source
Formulation
  Sub-graph set cover problem with the node connection property as a constraint
  NP-hard

19 Center-spread Query Planning Algorithm (1)
Algorithm initialization
  Starting nodes
  Target nodes
  Center nodes: aggregation data source nodes
Algorithm overview (see the sketch below)
  Graph traversal starts from the center nodes
  Gradually add the center nodes' neighbors while adhering to the node connection property
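A rough sketch of the center-spread idea under simplifying assumptions, not the paper's exact algorithm: grow a connected sub-graph outward from the aggregation (center) source until it also contains the grouping source, so the node connection property holds by construction. Source names and edges are illustrative.

from collections import deque

def center_spread(graph, center, grouping):
    """graph: undirected adjacency {node: [neighbor, ...]}"""
    selected = {center}
    queue = deque([center])
    while queue:
        node = queue.popleft()
        if grouping in selected:
            break
        for neighbor in graph.get(node, []):
            if neighbor not in selected:
                selected.add(neighbor)
                queue.append(neighbor)
    return selected if grouping in selected else None

graph = {
    "dbSNP": ["NCBI Gene", "Alfred"],
    "NCBI Gene": ["dbSNP", "GO"],
    "Alfred": ["dbSNP"],
    "GO": ["NCBI Gene"],
}
# dbSNP aggregates SNP frequencies; the grouping attribute comes from GO.
print(center_spread(graph, center="dbSNP", grouping="GO"))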

20 Center-spread Query Planning Algorithm (2)
(Figure: example plan spreading out to the grouping data source)

21 Query Planning Problem for Nested Entity Query (1)
Example: "SNP_Frequency, Gene {Function, X}"
  Find the genes which have the same functions as X
General form: b {a, e1, …, ek}
  Find the entities specified by b that have the same value on attribute a as the entities specified by e1, …, ek

22 Query Planning Problem for Nested Entity Query (2)
Node linking property: the linking data source, i.e., the data source covering the attribute keyword a, must be topologically before the data source covering the entity keyword b
  Example: "Gene {Function, Protein X}", with b = Gene, a = Function, e = Protein X
Formulation
  Sub-graph set cover problem with the node linking property as a constraint
  NP-hard

23 Plan Combination
(Figure: ending nodes and receiving nodes in the plan combination step)

24 Plan Merging
Query plans for sub-queries can be similar
Merging reduces the network transmission cost of a query plan
Two edges can be merged if the used input and output of the paired data sources are the same; mergeable edges carry weights
Optimal merging (a simple greedy sketch follows below)
  Compatibility graph CG
  Maximal node-weighted clique in CG
  Modified reactive local search (tabu search) algorithm
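An illustrative sketch of the merging step, assuming the mergeable edges, their weights, and their pairwise compatibility are already known. A greedy heavy clique stands in for the tabu-search-based maximal node-weighted clique computation described above; all names and values are made up.

def greedy_weighted_clique(nodes, weights, compatible):
    """nodes: mergeable edges; weights: {node: weight};
    compatible: set of frozenset pairs that may be merged together."""
    clique = []
    for node in sorted(nodes, key=lambda n: -weights[n]):
        # Add the heaviest node that is compatible with everything chosen so far.
        if all(frozenset((node, member)) in compatible for member in clique):
            clique.append(node)
    return clique

# Hypothetical mergeable edges e1..e3 with illustrative weights.
nodes = ["e1", "e2", "e3"]
weights = {"e1": 3.0, "e2": 2.0, "e3": 1.5}
compatible = {frozenset(("e1", "e2")), frozenset(("e2", "e3"))}
print(greedy_weighted_clique(nodes, weights, compatible))  # ['e1', 'e2']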

25 Query Execution Optimization: Pipelined Aggregation
Perform aggregation in a pipelined manner
Reduce transmission cost by early pruning
Grouping-first query plans

26 Query Execution Optimization: Moving Partial Grouping Forward
Aggregation-first query plans
Conditions
  The aggregation data source AD covers a term pga
  1-to-1 relation between the entity specified by pga and the entity specified by the grouping attribute
  N-to-1 relation between the entity specified by the aggregation attribute and the entity specified by pga

27 Query Planning Evaluation (1)
Cost model evaluation: query plan size

28 Query Planning Evaluation (2)
Planning algorithm scalability
  0.03% query planning overhead

29 Query Planning Evaluation (3)
Optimization techniques compared
  NO: no optimization technique used
  Merging: only perform plan merging
  Grouping: only perform the two grouping optimizations
  M+G: perform both merging and grouping

30 Outline
Introduction and Motivation
System Core
  Query planning problem
  Query planning algorithms
Other system components
  Query caching
  Fault tolerance
  Schema mining
Proposed work

31 Query Caching: Motivation
High response time for deep web queries
Motivating observations
  Data source redundancy
  Data sources return answers in an all-in-one fashion
  Users issue similar queries in one session
Query-plan-driven query caching method
  Not only cache previous data, but also query plans
  Caching query plans increases the possibility of data reuse

32 Query Caching: Strategy Overview
We are given a list of n previously issued queries, each with a query plan Pi
Given a new query q, we generate a query plan for q as follows (a toy reusability check is sketched below)
  Define a reusability metric to identify the previous query plans that are beneficial to reuse
  Select a set of reusable previous queries and query plans
  Use a selection function to obtain the sub-query plans we would like to reuse
  Use a modified query planning algorithm to generate the query plan for the new query based on the reusable plan templates
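A minimal sketch of a plan-driven reusability check, assuming each cached plan is summarized by the set of (entity, attribute) keywords it covers; the metric, threshold, and plan names are illustrative, not SEEDEEP's exact definitions.

def reusability(cached_plan_keywords, new_query_keywords):
    # Fraction of the new query already answered by the cached plan.
    if not new_query_keywords:
        return 0.0
    return len(cached_plan_keywords & new_query_keywords) / len(new_query_keywords)

def select_reusable_plans(cached_plans, new_query_keywords, threshold=0.5):
    """cached_plans: {plan_id: keyword set}; returns plan ids worth reusing."""
    return [pid for pid, kws in cached_plans.items()
            if reusability(kws, new_query_keywords) >= threshold]

cached = {
    "P1": {("Gene", "Function"), ("SNP", "Frequency")},
    "P2": {("Protein", "Structure")},
}
query = {("Gene", "Function"), ("SNP", "Heterozygosity")}
print(select_reusable_plans(cached, query))  # ['P1']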

33 Query Caching: Evaluation
Three mechanisms compared
  NC: no caching
  DDC: data-driven caching
  PDC: plan-driven caching

34 Fault Tolerance: Motivation
Remote data sources are vulnerable to unavailability or inaccessibility
Data redundancy across multiple data sources, often only partial
Use similar data sources to hide unavailable or inaccessible data sources
Data-redundancy-based incremental query processing
  Do not generate a new plan from scratch
  The inaccessible part is suspended
  Incrementally generate a new part to replace the inaccessible part

35 Fault Tolerance: Strategy Overview
System model: data redundancy graph
  Nodes: data sources
  Edges: redundancy usage between a data source pair
Given a query plan P and a set of unavailable data sources UDS, find the minimal impacted sub-plan MISubP (a sketch of the first step follows below)
  Impacted sub-plan: a sub-plan of the original plan P rooted at an unavailable data source in UDS
  Minimal impacted sub-plan: an impacted sub-plan with no usable data sources
Generate the maximal fixable sub-query of the minimal impacted sub-plan
  The maximal fixable sub-query does not contain any dead attributes, which are the attributes covered by the minimal impacted sub-plan
Generate a query plan for the maximal fixable sub-query as the new partial query plan
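A sketch of the first step only, under the assumption that a query plan is a tree of data source nodes of the form {"source": ..., "children": [...]}: locate the sub-plans rooted at unavailable sources. The plan structure and source names are illustrative.

def impacted_subplans(plan, unavailable):
    impacted = []
    def walk(node):
        if node["source"] in unavailable:
            impacted.append(node)      # the whole subtree below this source is affected
            return
        for child in node.get("children", []):
            walk(child)
    walk(plan)
    return impacted

plan = {"source": "NCBI Gene", "children": [
    {"source": "SNP500Cancer", "children": [{"source": "dbSNP", "children": []}]},
    {"source": "GO", "children": []},
]}
print([p["source"] for p in impacted_subplans(plan, {"SNP500Cancer"})])  # ['SNP500Cancer']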

36 Fault Tolerance: Evaluation
Query plan execution time compared between
  Generating a new plan from scratch
  Our incremental query processing strategy

37 Schema Mining: Motivation
Data source metadata reveals data source coverage information
Metadata: input and output attributes
Data sources only return a partial set of output attributes in response to a query
  Only the attributes that have non-NULL values for the given input
Goal: find an approximately complete output attribute set

38 Schema Mining: Strategy Overview
Sampling-based method (a toy sketch follows below)
  A modest-sized sample can discover most of a deep web data source's output schema
  A rejection sampling method chooses the sample
  A sample size estimator is constructed
Mixture model method
  For when a sample is not enough
  Output attributes can be shared among different data sources
  Data source: a probabilistic data source model generates output attributes with certain probabilities
  Borrowability among data sources: an output attribute is generated from a mixture of different probabilistic data source models
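A rough sketch of sampling-based output schema discovery under simplifying assumptions: query_source stands in for probing the real deep web form, and sampling stops once several consecutive probes add no new attributes (a crude stand-in for the rejection sampling and sample size estimation above). All names and data are illustrative.

import random

def discover_output_schema(query_source, input_pool, patience=5, max_probes=100):
    observed = set()
    stale = 0
    for _ in range(max_probes):
        probe = random.choice(input_pool)
        attributes = query_source(probe)          # attributes with non-NULL values
        new = set(attributes) - observed
        observed |= new
        stale = 0 if new else stale + 1
        if stale >= patience:                     # schema estimate has stabilized
            break
    return observed

# Toy source: each probe exposes a random subset of the full schema.
def toy_source(gene):
    full = {"gene_id", "function", "chromosome", "heterozygosity"}
    return random.sample(sorted(full), k=random.randint(1, len(full)))

print(discover_output_schema(toy_source, ["TP53", "BRCA1", "MSMB", "RET"]))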

39 Schema Mining: Evaluation
Four methods compared
  SamplePC: sampling + perfect label classifier
  SampleRC: sampling + real label classifier
  Mixture: mixture model method
  Mixture + Sample: SampleRC + Mixture

40 Outline
Introduction and Motivation
System Core
  Query planning problem
  Query planning algorithms
Other system components
  Query caching
  Fault tolerance
  Schema mining
Proposed work

41 Answering Relationship Search over Deep Web Data Sources: Motivation and Formulation
Knowledge is only useful when it is related: linked web data
Deep web data sources are ideal sources for linked data
  Supported by backend relational databases
  Data on output pages are related
  Deep web data sources are correlated through input and output relations
  Deep web data source output pages are hyperlinked with output pages from other data sources
Problem formulation
  A relationship query RQ = {ke1, ke2}
  Find the terms that relate ke1 with ke2

42 Relationship Query: Proposed Method 1
Use the correlation among data sources
Example: Q = {MSMB, RET}, find the relation between these two genes
  Connect the data sources taking the two genes as input
  Connect a data source taking one gene as input with another data source producing the other gene as output
A modified version of the query planning algorithm introduced in the current work

43 Relationship Query: Proposed Method 2
Use hyperlinks among different output pages to build relations
Two-level source-object graph model
  Sample output pages
  Extract objects (entities), represented as (data source, object name) pairs
  Extract hyperlinks on output pages pointing from one object to another object on a different output page
  Nodes: data source nodes and object nodes
  Data source virtual link edges connect correlated data sources
  Hyperlink edges connect hyperlinked object nodes, or connect a data source node with its corresponding object nodes
  Edges are weighted

44 Relationship Query: Graph Model
(Figure legend: data source nodes, object nodes, data source virtual link edges, hyperlink edges, edge weights)

45 Relationship Query: Method 2 Algorithm
Identify two nodes in the graph as the path ends
Path weight: the product of edge weights
Finding the shortest N paths is an NP-hard problem (a toy sketch follows below)
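A self-contained sketch, on an illustrative source-object graph, of ranking relationship paths by multiplicative weight: enumerate simple paths between the two query ends with DFS and keep the N heaviest products. This brute-force enumeration is only workable on a small sampled graph and is not the paper's algorithm; node names and weights are made up.

def top_n_paths(graph, start, end, n=3):
    """graph: {node: [(neighbor, weight in (0, 1]), ...]}"""
    results = []
    def dfs(node, path, weight):
        if node == end:
            results.append((weight, path))
            return
        for neighbor, w in graph.get(node, []):
            if neighbor not in path:               # keep paths simple (no cycles)
                dfs(neighbor, path + [neighbor], weight * w)
    dfs(start, [start], 1.0)
    return sorted(results, reverse=True)[:n]

graph = {
    ("src", "dbSNP"): [(("obj", "rs12345"), 0.9)],
    ("obj", "rs12345"): [(("obj", "MSMB"), 0.6), (("obj", "RET"), 0.4)],
    ("obj", "MSMB"): [(("src", "NCBI Gene"), 0.8)],
    ("obj", "RET"): [(("src", "NCBI Gene"), 0.7)],
}
print(top_n_paths(graph, ("src", "dbSNP"), ("src", "NCBI Gene")))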

46 Quality-Aware Data Source Selection based on Functional Dependency Analysis: Motivation
Current data source selection methods
  Coverage
  Overlap
  Relevance
Quality-aware data source selection: data richness
  Both sources A and B provide information on genes and their encoded proteins
  A only considers one encoding schema, but B considers two
  B is better than A, but how do we detect that?
Which one is better? Can we find the information we need?

47 Quality-Aware Data Source Selection: Proposed Method (1)
Functional dependency
  A functional dependency X → Y holds if any two tuples t1 and t2 that agree on X must also agree on Y
The previous example
  Data source A
  Data source B
Extract functional dependencies (a small check is sketched below)
  Sampling: obtain data tuples from deep web data sources
  Discover functional dependencies over the sample
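A small sketch of checking a candidate functional dependency X -> Y over tuples sampled from a source; the data below are illustrative, and a real system would obtain the tuples through the source's query interface rather than in-memory lists.

def holds(tuples, lhs, rhs):
    """X -> Y holds if tuples agreeing on the lhs attributes also agree on the rhs."""
    seen = {}
    for t in tuples:
        key = tuple(t[a] for a in lhs)
        value = tuple(t[a] for a in rhs)
        if key in seen and seen[key] != value:
            return False
        seen[key] = value
    return True

# Toy samples: source A records one encoding per gene, source B records two for MSMB.
sample_a = [{"gene": "MSMB", "protein": "P1"}, {"gene": "RET", "protein": "P2"}]
sample_b = sample_a + [{"gene": "MSMB", "protein": "P3"}]
print(holds(sample_a, ["gene"], ["protein"]))  # True: the dependency holds on A
print(holds(sample_b, ["gene"], ["protein"]))  # False: B encodes more variants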

48 Quality-Aware Data Source Selection: Proposed Method (2)
A set of data sources, each with a set of functional dependencies
Functional dependency lattice
  Built over an attribute set
  Each data source has a functional dependency set defined on that attribute set

49 Optimized Query Answering over Deep Web Data Sources: Motivation and Formulation
Current technique: minimize the number of data sources, guided by a benefit function
A more interesting objective: minimize the total query plan execution time
Optimization problem 1: single query
  Minimize response time (ERT), maximize plan quality (RS)
  Maximize the plan gain per execution unit
Optimization problem 2: multiple queries
  Minimize the total response time for multiple queries
  A scheduling problem; no similarity among queries is assumed

50 Optimized Query Answering: Proposed Methods
Optimization for a single query
  Tabu search framework to find the optimal plan
Optimization for multiple queries
  A query is a job with a list of tasks
  Data sources are machines
  Dependencies exist among tasks; each task can be performed on a set of machines
  Data source response time is the machine working time
  A job scheduling problem

51 Conclusion
SEEDEEP: a System for Exploring and quErying DEEP web data sources
Query planning
  Three query planning algorithms
  Query planning and execution optimization techniques
Other components
  Query caching: query-plan-driven
  Fault tolerance: redundancy-based incremental query processing
  Schema mining: sampling and mixture model approaches
Proposed work
  New query types: relationship query
  Data source selection: quality-aware
  New optimization problems: single query and multiple queries

