Datalog, View Unfolding, Recursion and Foundations of Integration


1 Datalog, View Unfolding, Recursion and Foundations of Integration
Zachary G. Ives, University of Pennsylvania
CIS 550 – Database & Information Systems
December 4, 2018
Some slide content courtesy of Susan Davidson, AnHai Doan, Dan Suciu, & Raghu Ramakrishnan

2 An Important Set of Questions
Views are incredibly powerful formalisms for describing how data relates: fn: rel × … × rel → rel
Can I define a view recursively? Why might this be useful in the XML construction case? When should the recursion stop?
Suppose we have two views, v1 and v2:
How do I know whether they represent the same data?
If v1 is materialized, can we use it to compute v2?
This is fundamental to query optimization and data integration, as we’ll see later

3 Reasoning about Queries and Views
SQL and XQuery are a bit too complex to reason about directly; some of their features make reasoning about queries undecidable
We need an elegant way of describing views (let’s assume a relational model for now):
Should be declarative
Should be less complex than SQL
Doesn’t need to support all of SQL – aggregation, for instance, may be more than we need

4 Let’s Go Back a Few Weeks… Domain Relational Calculus
Queries have the form: {<x1, x2, …, xn> | p}
where x1, x2, …, xn are domain variables and the predicate p is a boolean expression over x1, x2, …, xn
We have the following operations:
<xi, xj, …> ∈ R
xi op xj, xi op const, const op xi
∃xi. p, ∀xj. p
p ∧ q, p ∨ q, ¬p, p ⇒ q
where op is one of <, >, =, ≤, ≥, ≠ and xi, xj, … are domain variables; p, q are predicates
Recall that this captures the same expressiveness as the relational algebra
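As a small worked example (my own, using the Professor(fid, name) and Teaches(fid, serno, sem) relations that appear a few slides ahead, with a made-up semester constant), "names of professors who taught something in fall 2018" can be written:

{<n> | ∃f. ∃s. <f, n> ∈ Professor ∧ <f, s, "FALL18"> ∈ Teaches}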

5 A Similar Logic-Based Language: Datalog
Borrows the flavor of the relational calculus but is a “real” query language
Based on the Prolog logic-programming language
A “datalog program” is a series of if-then rules (Horn rules) that define relations from predicates
Rules are generally of the form:
Rout(T1) ← R1(T2), R2(T3), …, c(T2 ∪ … ∪ Tn)
where Rout is the relation representing the query result, the Ri are predicates representing relations, c is an expression using arithmetic/boolean predicates over the variables, and the Ti are tuples of variables

6 Datalog Terminology An example datalog rule:
idb(x, y) ← r1(x, z), r2(z, y), z < 10
Here idb(x, y) is the head; the subgoals r1(x, z), r2(z, y), z < 10 form the body
Irrelevant variables can be replaced by _ (the anonymous variable)
Extensional relations or database schemas (edbs) are relations occurring only in rules’ bodies – these are base relations with “ground facts”
Intensional relations (idbs) appear in the heads – these are basically views
Distinguished variables are the ones output in the head
Ground facts have only constants, e.g., r1(“abc”, 123)

7 Datalog in Action
As in the DRC, the output (head) consists of a tuple for each possible assignment of variables that satisfies the predicate
We typically avoid “∀” in Datalog queries: variables in the body are existential, ranging over all possible values
Multiple rules with the same relation in the head represent a union; we often try to avoid disjunction (“∨”) within rules
Let’s see some examples of datalog queries (which consist of 1 or more rules); a runnable sketch of how they evaluate follows:
Given Professor(fid, name), Teaches(fid, serno, sem), Courses(serno, cid, desc), Student(sid, name):
Return course names other than CIS 550
Return the names of the teachers of CIS 550
Return the names of all people (professors or students)
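To make the semantics concrete, here is a minimal sketch in Python (my own illustration; the relation instances and the Datalog rules in the comments are assumptions, since the slide leaves the queries as exercises):

# Tiny, made-up instances of the schema from the slide.
professor = {(1, "Davidson"), (2, "Ives")}             # Professor(fid, name)
teaches   = {(2, 501, "FALL18")}                       # Teaches(fid, serno, sem)
courses   = {(501, "CIS 550", "Databases"),            # Courses(serno, cid, desc)
             (502, "CIS 500", "Software Foundations")}
student   = {(7, "Smith")}                             # Student(sid, name)

# other(C) :- Courses(S, C, D), C != "CIS 550"
other = {cid for (serno, cid, desc) in courses if cid != "CIS 550"}

# teachers550(N) :- Professor(F, N), Teaches(F, S, Sem), Courses(S, "CIS 550", D)
teachers550 = {n for (f, n) in professor
                 for (f2, s, sem) in teaches
                 for (s2, cid, d) in courses
                 if f == f2 and s == s2 and cid == "CIS 550"}

# people(N) :- Professor(F, N)    people(N) :- Student(S, N)   (union of rules)
people = {n for (_, n) in professor} | {n for (_, n) in student}

print(other, teachers550, people)  # {'CIS 500'} {'Ives'} {'Davidson', 'Ives', 'Smith'}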

8 Datalog is Relationally Complete
We can map RA → Datalog:
Selection σp: p becomes a datalog subgoal
Projection πA: we drop projected-out variables from the head
Cross-product r × s: q(A, B, C, D) ← r(A, B), s(C, D)
Join r ⋈ s: q(A, B, C, D) ← r(A, B), s(C, D), condition
Union r ∪ s: q(A, B) ← r(A, B) ; q(C, D) ← s(C, D)
Difference r – s: q(A, B) ← r(A, B), ¬s(A, B)
(If you think about it, DRC → Datalog is even easier)
Great… but then why do we care about Datalog?
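As a quick sanity check (my own sketch, over made-up instances), the same operations mirrored in Python over sets of tuples:

r = {(1, "a"), (2, "b")}
s = {(1, "a"), (3, "c")}

sel   = {(a, b) for (a, b) in r if a > 1}                         # selection sigma_{A>1}(r)
proj  = {a for (a, b) in r}                                       # projection pi_A(r)
cross = {(a, b, c, d) for (a, b) in r for (c, d) in s}            # r x s
join  = {(a, b, c, d) for (a, b) in r for (c, d) in s if a == c}  # r join s on A = C
union = r | s                      # q(A,B) :- r(A,B) ; q(A,B) :- s(A,B)
diff  = r - s                      # q(A,B) :- r(A,B), not s(A,B)
print(sel, proj, cross, join, union, diff)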

9 View Unfolding
We can define views (idbs) using one or more datalog rules:
dbirProfs(fid, subj) :- Teaches(fid, cid), COURSE(cid, subj, serno, sem), subj = “DB”
dbirProfs(fid, subj) :- Teaches(fid, cid), COURSE(cid, subj, serno, sem), subj = “IR”
Now if we query the view, we can compose or unfold the query + view:
dbProfs(fid, nam) :- dbirProfs(fid, “DB”), PROFESSOR(fid, nam)
Unfolding substitutes the view’s definition for its use; here only the first dbirProfs rule can match (subj is bound to “DB”), giving:
dbProfs(fid, nam) :- Teaches(fid, cid), COURSE(cid, “DB”, serno, sem), PROFESSOR(fid, nam)

10 A Query We Can’t Answer in RA/TRC/DRC…
Recall our example of a binary relation for graphs or trees (similar to an XML Edge relation): edge(from, to)
If we want to know what nodes are reachable:
reachable(F, T, 1) :- edge(F, T)                        distance 1
reachable(F, T, 2) :- edge(F, X), edge(X, T)            dist. 2
reachable(F, T, 3) :- reachable(F, X, 2), edge(X, T)    dist. 3 (another way of writing a chain of three edges)
But how about all reachable paths? (Note this was easy in XPath over an XML representation: //edge)

11 Recursive Datalog Queries
Define a recursive query in datalog:
reachable(F, T, 1) :- edge(F, T)                                  distance 1
reachable(F, T, D + 1) :- reachable(F, X, D), edge(X, T)          distance > 1
What does this mean, exactly, in terms of logic? There are actually three different (equivalent) definitions of the semantics. All make a “closed-world” assumption: facts should exist only if they can be proven true from the input – i.e., assume the DB contains all of the truths out there!
Written out non-recursively, the query would be:
reachable(From, To, 1) :- edge(From, To)
reachable(From, To, 2) :- reachable(From, Midpoint, 1), edge(Midpoint, To)
reachable(From, To, 3) :- reachable(From, Midpoint, 2), edge(Midpoint, To)
...
The first rule gets all connections between points where there’s a direct edge. The second rule finds all connections from distance-1 nodes to one further node (i.e., distance-2 connections). The third rule extends the distance-2 connections by yet another edge (distance-3 connections), etc.
We can describe all of this recursively as:
reachable(From, To, D + 1) :- reachable(From, Midpoint, D), edge(Midpoint, To)
which expands out to the rules above.
This example is somewhat different from our XML example, where we’re trying to find all elements at distance k from the root node, for k up to the depth of the tree. There we can write a very similar query:
elementsAt(Parent, ID, 1) :- edge(Parent, ID), Parent = NULL
elementsAt(Parent, ID, D + 1) :- elementsAt(Parent, LastNode, D), edge(LastNode, ID)
The base case finds the root node; the recursive case finds the nodes connected by an edge from the root, then the nodes connected by an edge from those, etc.

12 Fixpoint Semantics
One of the three Datalog semantics is based on a notion of fixpoint:
We start with an instance of data, then derive all immediate consequences
We repeat as long as we derive new facts
In the RA, this requires a while loop! However, an unrestricted while loop is too powerful and needs to be restricted
Special case: “inflationary semantics” (which terminates in time polynomial in the size of the database!)
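To make the fixpoint concrete, here is a minimal naive-evaluation sketch in Python for the reachable query from the previous slide (my own illustration; the edge instance is made up, and a real engine would use semi-naive evaluation instead of recomputing every round):

# Naive fixpoint evaluation of:
#   reachable(F, T, 1)     :- edge(F, T)
#   reachable(F, T, D + 1) :- reachable(F, X, D), edge(X, T)
edge = {("a", "b"), ("b", "c"), ("c", "d")}      # made-up, acyclic instance

reachable = {(f, t, 1) for (f, t) in edge}       # consequences of the base rule
while True:
    new = {(f, t2, d + 1)
           for (f, x, d) in reachable
           for (x2, t2) in edge
           if x == x2} - reachable
    if not new:          # fixpoint reached: no new facts can be derived
        break
    reachable |= new     # inflationary: facts are only ever added

print(sorted(reachable))
# Note: because D keeps growing, this only terminates on acyclic graphs --
# one reason "pure" Datalog restricts such arithmetic in recursive rules.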

13 Our Query in RA + while (inflationary semantics, no negation)
Datalog:
reachable(F, T, 1) :- edge(F, T)
reachable(F, T, D + 1) :- reachable(F, X, D), edge(X, T)
RA procedure with while:
reachable += edge ⋈ literal1
while change {
  reachable += πF,T,D( ρF→X(edge) ⋈ ρT→X, D→D0(reachable) ⋈ add1 )
}
Note: literal1(F, 1) and add1(D0, D) are actually arithmetic and literal functions modeled here as relations.

14 Negation in Datalog Datalog allows for negation in rules
It’s essential for capturing RA set-difference-style ops:
Professor(name), ¬Student(name)
But negation can be tricky…
You may recall that in the DRC we had a notion of “unsafe” queries, and they return here:
Single(X) ← Person(X), ¬Married(X, Y)
(Y appears only under the negation, so the rule is unsafe – more on this next)

15 Safe Rules/Queries Range restriction, which requires that every variable: Occurs at least once in a positive relational predicate in the body, Or it’s constrained to equal a finite set of values by arithmetic predicates Unsafe: q(X)  r(Y) q(X)  : r(X,X) q(X)  r(X) Ç t(Y) Safe: q(X)  r(X,Y) q(X)  X = 5 q(X)  : r(X,X), s(X) q(X)  r(X) Ç (t(Y),u(X,Y)) For recursion, use stratified semantics: Allow negation only over edb predicates Then recursively compute values for the idb predicates that depend on the edb’s (layered like strata)

16 A Special Type of Query: Conjunctive Queries
A single Datalog rule with no “∨”, “¬”, “∀” can express select, project, and join – a conjunctive query
Conjunctive queries are possible to reason about statically
(Note that we can write CQs in other languages, e.g., SQL!)
We know how to “minimize” conjunctive queries
An important simplification that can’t be done for general SQL
We can test whether one conjunctive query’s answers always contain another conjunctive query’s answers (for ANY instance)
Why might this be useful?
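For intuition about minimization (a standard example, not from the slide): the query q(X) :- r(X, Y), r(X, Z) is equivalent to the smaller query q(X) :- r(X, Y), since mapping Z to Y shows the second subgoal is redundant; minimization deletes such subgoals until none can be removed.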

17 Example of Containment
Suppose we have two queries:
q1(S, C) :- Student(S, N), Takes(S, C), Course(C, X), inCIS(C), Course(C, “DB & Info Systems”)
q2(S, C) :- Student(S, N), Takes(S, C), Course(C, X)
Intuitively, q1 must return the same or fewer answers than q2:
It has all of the same conditions, plus extra conjuncts (i.e., it’s more restricted)
There’s no union or any other way it can add more data
We say that q2 contains q1 because this holds for ANY instance of our DB {Student, Takes, Course}
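A classic way to decide CQ containment is the canonical-database test: freeze q1’s variables into constants, evaluate q2 over the resulting database, and check whether q1’s frozen head tuple is produced. Below is a minimal Python sketch (my own encoding; for simplicity every argument is treated as a variable, so constants like “DB & Info Systems” are left out):

# Queries are encoded as (head_vars, [(predicate, args), ...]).
def evaluate(query, db):
    head, body = query
    bindings = [{}]
    for pred, args in body:                      # join subgoals one at a time
        extended = []
        for b in bindings:
            for fact in db.get(pred, set()):
                b2 = dict(b)
                if all(b2.setdefault(a, v) == v for a, v in zip(args, fact)):
                    extended.append(b2)
        bindings = extended
    return {tuple(b[v] for v in head) for b in bindings}

def contains(q2, q1):
    """True iff q2's answers contain q1's on every instance (CQs only)."""
    head1, body1 = q1
    canonical = {}                               # freeze q1's variables
    for pred, args in body1:
        canonical.setdefault(pred, set()).add(tuple(args))
    return tuple(head1) in evaluate(q2, canonical)

q1 = (["S", "C"], [("Student", ("S", "N")), ("Takes", ("S", "C")),
                   ("Course", ("C", "X")), ("inCIS", ("C",))])
q2 = (["S", "C"], [("Student", ("S", "N")), ("Takes", ("S", "C")),
                   ("Course", ("C", "X"))])
print(contains(q2, q1), contains(q1, q2))        # True False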

18 Wrapping up Datalog… We’ve seen a new language, Datalog
It’s basically a glorified DRC with a special feature: recursion
It’s much cleaner than SQL to reason about…
But negation (as in the DRC) poses some challenges
We’ve seen that a particular kind of query, the conjunctive query, is written naturally in Datalog
Conjunctive queries are possible to reason about: we can minimize them, or check containment
Conjunctive queries are very commonly used in our next problem, data integration

19 A Problem
We’ve seen that even with normalization and the same needs, different people will arrive at different schemas – and in fact, most people also have different needs!
Often people build databases in isolation, then want to share their data:
Different systems within an enterprise
Different information brokers on the Web
Scientific collaborators
Researchers who want to publish their data for others to use
This is the goal of data integration: tie together different sources, controlled by many people, under a common schema

20 Building a Data Integration System
Create a middleware “mediator” or “data integration system” over the sources:
Can be warehoused (a data warehouse) or virtual
Presents a uniform query interface and schema
Abstracts away the multitude of sources; consults them for relevant data
Unifies different source data formats (and possibly schemas)
Sources are generally autonomous, not designed to be integrated:
Sources may be local DBs or remote web sources/services
Sources may require certain input to return output (e.g., web forms): “binding patterns” describe these

21 Typical Data Integration Components
[Diagram: queries and results flow through the Data Integration System / Mediator, which exposes a Mediated Schema and consults a Source Catalog of mappings; Wrappers sit between the mediator and the underlying Source Relations]

22 Typical Data Integration Architecture
[Diagram: a Query is rewritten by the Reformulator, using Source Descriptions from the Source Catalog, into a query over the sources; the Query Processor sends queries + bindings to the Wrappers, which return data in the mediated format, producing the Results]

23 Challenges of Mapping Schemas
In a perfect world, it would be easy to match up items from one schema with another:
Every table would have a similar table in the other schema
Every attribute would have an identical attribute in the other schema
Every value would clearly map to a value in the other schema
In the real world, as with human languages, things don’t map clearly!
Schemas may have different numbers of tables – different decompositions
Metadata in one relation may be data in another
Values may not exactly correspond
It may be unclear whether a value is the same

24 Different Aspects to Mapping
Schema matching / ontology alignment: how do we find correspondences between attributes?
Entity matching / deduplication / record linking / etc.: how do we know when two records refer to the same thing?
Mapping definition: how do we specify the constraints or transformations that let us reason about when to create an entry in one schema, given an entry in another schema?
Let’s see one influential approach to schema matching…

25 The Basic Design Pattern (Est. by a System Called LSD)
Suppose a user wants to integrate 100 data sources
The user manually creates mappings for a few sources, say 3, and shows LSD these mappings
The system learns from the mappings:
“Multi-strategy” learning incorporates many types of information in a general way
Knowledge of constraints further helps
The system then proposes mappings for the remaining 97 sources

26 Example
Mediated schema: address, price, agent-phone, description
Learned hypotheses:
If “phone” occurs in the attribute name => agent-phone
If “fantastic” & “great” occur frequently in the data values => description
Schema and sample data of realestate.com:
location: Miami, FL; Boston, MA; …
listed-price: $250,000; $110,000; …
phone: (305) …; (617) …; …
comments: Fantastic house; Great location; …
Schema and sample data of homes.com:
price: $550,000; $320,000; …
contact-phone: (278) …; (617) …; …
extra-info: Beautiful yard; Great beach; …

27 Multi-Strategy Learning
Use a set of base learners, each of which exploits certain types of information well:
The name learner looks at words in the attribute names
The Naïve Bayes learner looks at patterns in the data values
Etc.
To match schema elements of a new source, apply the base learners; each returns a score
For different attributes, one learner is more useful than another
Combine their predictions using a meta-learner (sketched below)
The meta-learner uses training sources to measure each base learner’s accuracy, and weighs each learner based on its accuracy
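A minimal sketch of the combination step in Python, reproducing the (address, 0.7), (description, 0.3) meta-scores from the “area” example two slides ahead (the equal weights are made up; LSD’s actual meta-learner learns the weights from the training sources rather than taking them as input):

# Meta-learning as a weighted combination of base-learner scores.
def combine(base_predictions, learner_weights):
    combined = {}
    for learner, predictions in base_predictions.items():
        for attr, score in predictions.items():
            combined[attr] = combined.get(attr, 0.0) + learner_weights[learner] * score
    return max(combined, key=combined.get), combined

# Matching homes.com's "area" against the mediated schema:
predictions = {
    "name_learner": {"address": 0.8, "description": 0.2},
    "naive_bayes":  {"address": 0.6, "description": 0.4},
}
weights = {"name_learner": 0.5, "naive_bayes": 0.5}   # made-up equal weights
best, scores = combine(predictions, weights)
print(best, scores)   # 'address' wins, with a combined score of about 0.7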

28 Training the Learners
Mediated schema: address, price, agent-phone, description
Schema of realestate.com: location, listed-price, phone, comments
Marked-up listings such as:
<location> Miami, FL </> <listed-price> $250,000 </> <phone> (305) … </> <comments> Fantastic house </>
<location> Boston, MA </> <listed-price> $110,000 </> <phone> (617) … </> <comments> Great location </>
yield training data for each learner:
Name Learner: (location, address), (listed-price, price), (phone, agent-phone), (comments, description), …
Naïve Bayes Learner: (“Miami, FL”, address), (“$250,000”, price), (“(305) …”, agent-phone), (“Fantastic house”, description), …

29 Applying the Learners
Schema of homes.com: area, day-phone, extra-info; mediated schema: address, price, agent-phone, description
Each source attribute is scored against the mediated schema by every base learner, and the Meta-Learner combines the scores:
<area> Seattle, WA </>, <area> Kent, WA </>, <area> Austin, TX </>:
Name Learner: (address, 0.8), (description, 0.2); Naïve Bayes: (address, 0.6), (description, 0.4); Meta-Learner: (address, 0.7), (description, 0.3)
<day-phone> (278) … </>, <day-phone> (617) … </>, <day-phone> (512) … </>:
Meta-Learner: (agent-phone, 0.9), (description, 0.1)
<extra-info> Beautiful yard </>, <extra-info> Great beach </>, <extra-info> Close to Seattle </>:
Meta-Learner: (address, 0.6), (description, 0.4)
In general, an arbitrary number of learners can be plugged into the matching process; once a new learner has been trained, it can simply be plugged in.

30 Two Phases: Training and Matching
[Diagram: in the Training Phase, the mediated schema, source schemas, and data listings yield training data for the base learners L1, L2, …, Lk; in the Matching Phase, the learners’ predictions are combined, and a Constraint Handler applies domain constraints and user feedback to produce the final mapping combination]

31 Mappings between Schemas
Schema matchers provide attribute correspondences, but not complete mappings
Mappings generally are posed as views: define relations in one schema (typically either the mediated schema or the source schema), given data in the other schema
This allows us to “restructure” or “recompose + decompose” our data in a new way
We can also define mappings between values in a view:
We use an intermediate table defining correspondences – a “concordance table”
It can be filled in using some type of code, and corrected by hand

32 A Few Mapping Examples
Schemas:
T1: Movie(Title, Year, Director, Editor, Star1, Star2); PieceOfArt(ID, Artist, Subject, Title, TypeOfArt)
T2: MotionPicture(ID, Title, Year); Participant(ID, Name, Role)
Mappings as views (|| denotes concatenation):
PieceOfArt(I, A, S, T, “Movie”) :- Movie(T, Y, A, _, S1, S2), I = T || Y, S = S1 || S2
Movie(T, Y, D, E, S1, S2) :- MotionPicture(I, T, Y), Participant(I, D, “Dir”), Participant(I, E, “Editor”), Participant(I, S1, “Star1”), Participant(I, S2, “Star2”)
Mapping values:
T1: CustID | CustName        T2: PennID | EmpName
    1234   | Smith, J.           46732  | John Smith
We need a concordance table from CustIDs to PennIDs (a sketch follows)
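A minimal sketch of using the concordance table (my own illustration; the relation and rule names are hypothetical, as the slide only states that such a table is needed):

# Translate T1 customer records into T2 employee records via a concordance
# table from CustIDs to PennIDs. In Datalog terms (hypothetical names):
#   T2Emp(P, N) :- T1Cust(C, N), Concordance(C, P)
t1_cust = {(1234, "Smith, J.")}
concordance = {1234: 46732}     # built by some type of code, corrected by hand

t2_emp = {(concordance[c], n) for (c, n) in t1_cust if c in concordance}
print(t2_emp)                   # {(46732, 'Smith, J.')}
# The name formats still differ ("Smith, J." vs. "John Smith") -- that is the
# entity-matching problem from a few slides back.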

