Data Integration Helena Galhardas DEI IST (based on the slides of the course: CIS 550 – Database & Information Systems, Univ. Pennsylvania, Zachary Ives)

Agenda Overview of Data Integration Some systems:  The LSD System  The TSIMMIS System  Information Manifold

A Problem Even with normalization and the same needs, different people will arrive at different schemas. In fact, most people also have different needs! Often people build databases in isolation, then want to share their data:  Different systems within an enterprise  Different information brokers on the Web  Scientific collaborators  Researchers who want to publish their data for others to use This is the goal of data integration: tie together different sources, controlled by many people, under a common schema

Motivating example (1) FullServe: a company that provides internet access to homes, but also sells products to support the home computing infrastructure (e.g., modems, wireless routers, etc.) FullServe is a predominantly American company and decided to acquire EuroCard, a European company that is mainly a credit card provider, but has recently started leveraging its customer base to enter the internet market

Motivating Example (2) FullServe databases: Employee Database FullTimeEmp(ssn, empId, firstName, middleName, lastName) Hire(empId, hireDate, recruiter) TempEmployees(ssn, hireStart, hireEnd, name, hourlyRate) Training Database Courses(courseID, name, instructor) Enrollments(courseID, empID, date) Services Database Services(packName, textDescription) Customers(name, id, zipCode, streetAdr, phone) Contracts(custID, packName, startDate) Sales Database Products(prodName, prodId) Sales(prodName, customerName, address) Resume Database Interview(interviewDate, name, recruiter, hireDecision, hireDate) CV(name, resume) HelpLine Database Calls(date, agent, custId, text, action)

Motivating Example (3) EuroCard databases: Employee Database Emp(ID, firstNameMiddleInitial, lastName) Hire(ID, hireDate, recruiter) CreditCard Database Customer(CustID, cardNum, expiration, currentBalance) CustDetail(CustID, name, address) Resume Database Interview(ID, date, location, recruiter) CV(name, resume) HelpLine Database Calls(date, agent, custId, description, followup)

Motivating Example (4) Some queries employees or managers in FullServe may want to pose: The Human Resources Department may want to be able to query for all of its employees, whether in the US or in Europe  Requires access to 2 databases on the American side and 1 on the European side There is a single customer support hot-line, where customers can call about any service or product they obtain from the company. When a representative is on the phone with a customer, it's important to see the entire set of services the customer is getting from FullServe (internet service, credit card or products purchased). Furthermore, it is useful to know whether the customer is a big spender on their credit card.  Requires access to 2 databases on the US side and 1 on the European side.

Another example: searching for a new job (1)

Another example: searching for a new job (2) Each form (site) asks for a slightly different set of attributes (e.g., keywords describing the job, location and job category, or employer and job type) Ideally, we would like to have a single web site to pose our queries, and have that site integrate data from all relevant sites on the Web.

Goal of data integration Offer uniform access to a set of autonomous and heterogeneous data sources:  Querying disparate sources  Large number of sources  Heterogeneous data sources (different systems, different schemas, some structured, others unstructured)  Autonomous data sources: we may not have full access to the data, or a source may not be available all the time.

Why is it hard? Systems reasons: even with the same HW and all relational sources, the SQL supported is not always the same Logical reasons: different schemas (e.g., FullServe and EuroCard temporary employees), different attributes (e.g., ID), different attribute names for the same information (e.g., text and action), different representations of data (e.g., first name, last name); also known as semantic heterogeneity Social and administrative reasons

Building a Data Integration System Create a middleware “mediator” or “data integration system” over the sources  Can be warehoused (a data warehouse) or virtual  Presents a uniform query interface and schema  Abstracts away multitude of sources; consults them for relevant data Unifies different source data formats (and possibly schemas) Sources are generally autonomous, not designed to be integrated  Sources may be local DBs or remote web sources/services  Sources may require certain input to return output (e.g., web forms): “binding patterns” describe these

Logical components of a virtual integration system Wrappers: programs whose role is to send queries to a data source, receive answers and possibly apply some transformation to the answer Mediated schema: built for the data integration application and contains only the aspects of the domain relevant to the application. Most probably it will contain a subset of the attributes seen in the sources Source descriptions: specify the properties of the sources the system needs to know to use their data. Their main component are semantic mappings, which specify how attributes in the sources correspond to attributes in the mediated schema. Other info is whether sources are complete

A data integration scenario

Components of a data integration system

Query reformulation (1) Rewrite the user query, posed in terms of the relations in the mediated schema, into queries referring to the schemas of the data sources The result is called a logical query plan Ex: SELECT title, startTime FROM Movie, Plays WHERE Movie.title = Plays.movie AND location="New York" AND director="Woody Allen"

Query reformulation (2) Tuples for Movie can be obtained from source S1, but attribute title needs to be reformulated to name Tuples for Plays can be obtained from S2 or S3. Since S2 is complete for showings in NY, we choose it Since source S3 requires the title of a movie as input and the title is not specified in the query, the query plan must first access S1 and then feed the movie titles returned from S1 as inputs to S3.
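The plan just described (access S1 first, then feed its titles into S3 because S3 has a binding pattern) can be sketched in code. This is a minimal sketch with hypothetical wrapper stubs and made-up data; the real sources and schemas are not given here:

```python
# Hypothetical wrapper for S1: returns (name, year) pairs for a director.
# The stub ignores the filter and just returns canned data.
def s1_movies(director):
    return [("Annie Hall", 1977), ("Manhattan", 1979)]

# Hypothetical wrapper for S3: *requires* a movie title as input
# (a binding pattern), so it cannot be queried without one.
def s3_showtimes(title):
    schedule = {"Annie Hall": ["19:00"], "Manhattan": ["21:30"]}
    return schedule.get(title, [])

def logical_plan(director):
    # Access S1 first, then feed each returned title into S3.
    results = []
    for name, _year in s1_movies(director):       # "title" is S1's "name"
        for start in s3_showtimes(name):
            results.append((name, start))
    return results

print(logical_plan("Woody Allen"))
# [('Annie Hall', '19:00'), ('Manhattan', '21:30')]
```

The nesting order is forced by the binding pattern: reversing it (S3 before S1) is not executable, since S3 has no title to bind.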

Query optimization Accepts a logical query plan as input and produces a physical query plan  Specifies the exact order in which sources are accessed, when results are combined, which algorithms are used for performing operations on the data

Query execution Responsible for the execution of the physical query plan  Dispatches the queries to the individual sources through the wrappers and combines the results as specified by the query plan. Also may ask the optimizer to reconsider its plan based on its monitoring of the plan’s progress (e.g., if source S3 is slow)

Challenges of Mapping Schemas In a perfect world, it would be easy to match up items from one schema with another  Every table would have a similar table in the other schema  Every attribute would have an identical attribute in the other schema  Every value would clearly map to a value in the other schema Real world: as with human languages, things don’t map clearly!  May have different numbers of tables – different decompositions  Metadata in one relation may be data in another  Values may not exactly correspond  It may be unclear whether a value is the same

Different Aspects to Mapping Schema matching / ontology alignment How do we find correspondences between attributes? Entity matching / deduplication / record linkage / etc. How do we know when two records refer to the same thing? Mapping definition  How do we specify the constraints or transformations that let us reason about when to create an entry in one schema, given an entry in another schema?

Why Schema Matching is Important Schema matching arises in data integration, data translation, data warehousing, e-commerce and ontology matching, in settings ranging from enterprises and knowledge bases to the World-Wide Web, information agents and home users. Whenever an application has more than one schema, there is a need for schema matching!

Why Schema Matching is Difficult No access to exact semantics of concepts  Semantics not documented in sufficient detail  Schemas not adequately expressive to capture semantics Must rely on clues in schema & data  Using names, structures, types, data values, etc. Such clues can be unreliable  Synonyms: different names => same entity: area & address => location  Homonyms: same names => different entities: area => location or square-feet Done manually by domain experts  Expensive and time consuming
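As a toy illustration of why name clues alone are unreliable, here is a sketch of a purely name-based matcher. The synonym table and scores are invented for illustration; note that it needs an explicit synonym list to relate area and location, and it can do nothing about homonyms:

```python
# Illustrative synonym table; a real matcher would use a thesaurus
# or learn these relationships from training data.
SYNONYMS = {"area": {"location", "address"}, "phone": {"contact-phone"}}

def name_score(source_attr, mediated_attr):
    a, b = source_attr.lower(), mediated_attr.lower()
    if a == b:
        return 1.0
    if b in SYNONYMS.get(a, set()) or a in SYNONYMS.get(b, set()):
        return 0.8
    # Fall back to token overlap, e.g. "agent-phone" vs "phone".
    ta = set(a.replace("-", " ").split())
    tb = set(b.replace("-", " ").split())
    return len(ta & tb) / max(len(ta | tb), 1)

print(name_score("area", "location"))      # 0.8 (via synonym table)
print(name_score("agent-phone", "phone"))  # 0.5 (token overlap)
print(name_score("area", "square-feet"))   # 0.0 (homonym: no signal)
```

This is exactly why LSD combines several learners instead of relying on names alone.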

AnHai Doan, Pedro Domingos, Alon Halevy University of Washington SIGMOD 2001 Reconciling Schemas of Disparate Data Sources: A Machine Learning Approach The LSD Project

Charlie comes to town Find houses with 2 bedrooms priced under 300K homes.com realestate.com homeseekers.com

Data Integration Find houses with 2 bedrooms priced under 300K [figure: the query is posed against the mediated schema; wrappers translate it over source schemas 1-3 of homeseekers.com, realestate.com and homes.com]

The LSD (Learning Source Descriptions) Approach Suppose a user wants to integrate 100 data sources 1. User  manually creates mappings for a few sources, say 3  shows LSD these mappings 2. LSD learns from the mappings 3. LSD proposes mappings for the remaining 97 sources

Semantic Mappings between Schemas Mediated & source schemas = XML DTDs [figure: mediated-schema tree house(location, contact(name, phone)) matched to source tree house(address, num-baths, contact-info(agent-name, agent-phone)); location/address is a 1-1 mapping, while num-baths vs. full-baths + half-baths is a non 1-1 mapping]

Example [figure: realestate.com, with schema (location, listed-price, phone, comments), and homes.com, with schema (price, contact-phone, extra-info), matched against the mediated schema (address, price, agent-phone, description). Learned hypotheses: if "fantastic" & "great" occur frequently in data values => description; if "phone" occurs in the name => agent-phone]

LSD Contributions 1. Use of multi-strategy learning  well-suited to exploit multiple types of knowledge  highly modular & extensible 2. Extend learning to incorporate constraints  handle a wide range of domain & user-specified constraints 3. Develop XML learner  exploit hierarchical nature of XML

Multi-Strategy Learning Use a set of base learners  Each exploits well certain types of information: Name learner looks at words in the attribute names Naïve Bayes learner looks at patterns in the data values Etc. Match schema elements of a new source  Apply the base learners Each returns a score For different attributes one learner is more useful than another  Combine their predictions using a meta-learner Meta-learner  Uses training sources to measure base learner accuracy  Weights each learner based on its accuracy
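The meta-learner's weighted combination of base-learner predictions can be sketched as follows. The learner weights and prediction scores below are assumed values for illustration, not numbers from the LSD paper; in LSD the weights come from measuring each base learner's accuracy on the training sources:

```python
def combine(predictions, weights):
    """predictions: {learner: {label: score}}; weights: {learner: weight}.
    Returns candidate labels ranked by linear weighted score."""
    scores = {}
    for learner, dist in predictions.items():
        w = weights[learner]
        for label, s in dist.items():
            scores[label] = scores.get(label, 0.0) + w * s
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Illustrative predictions for one source attribute.
preds = {
    "name":        {"address": 0.3, "description": 0.7},
    "naive_bayes": {"address": 0.8, "description": 0.2},
}
weights = {"name": 0.4, "naive_bayes": 0.6}

# address: 0.4*0.3 + 0.6*0.8 = 0.60; description: 0.4*0.7 + 0.6*0.2 = 0.40
print(combine(preds, weights))
```

The point of the weighting is visible here: Naive Bayes is trusted more, so its vote for address wins even though the name learner preferred description.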

Base Learners Input  schema information: name, proximity, structure,...  data information: value, format,... Output  prediction weighted by confidence score Examples  Name learner agent-name => (name,0.7), (phone,0.3)  Naive Bayes learner “Kent, WA” => (address,0.8), (name,0.2) “Great location” => (description,0.9), (address,0.1)

The two phases of LSD: Training Phase and Matching Phase [figure: the mediated schema, source schemas and data listings provide training data for the base learners L1..Lk; a meta-learner combines their predictions, and the constraint handler applies domain constraints and user feedback to produce the mapping combination] Base learners: Name Learner, XML learner, Naive Bayes, Whirl learner Meta-learner  uses stacking [Ting&Witten99, Wolpert92]  returns linear weighted combination of base learners' predictions

Training phase 1. Manually specify 1-1 mappings for several sources 2. Extract source data 3. Create training data for each base learner 4. Train the base learners 5. Train the meta-learner

Training the Learners [figure: from realestate.com, the 1-1 mappings (location, address), (listed-price, price), (phone, agent-phone), (comments, description) are training data for the Name Learner, while value pairs such as ("Miami, FL", address), ("$250,000", price), ("Fantastic house", description) are training data for the Naive Bayes Learner]

The matching phase 1. Extract and collect data 2. Match each source-DTD tag 3. Apply the constraint handler

Figure 5 - SIGMOD’01 paper

Applying the Learners [figure: for the homes.com schema (area, day-phone, extra-info), with values such as "Seattle, WA", "Kent, WA" and "Beautiful yard", the Name Learner and Naive Bayes each return weighted predictions, e.g. (address,0.8), (description,0.2); the Meta-Learner combines them into, e.g., (address,0.7), (description,0.3) for area and (agent-phone,0.9), (description,0.1) for day-phone]

Domain Constraints Impose semantic regularities on sources  verified using schema or data Examples  a = address & b = address => a = b  a = house-id => a is a key  a = agent-info & b = agent-name => b is nested in a Can be specified up front  when creating mediated schema  independent of any actual source schema
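A constraint of the form "a = address & b = address => a = b" (two distinct attributes cannot both map to address) can be enforced by pruning candidate assignments. A sketch with illustrative candidates and scores; a real constraint handler would search far more cleverly than this brute-force enumeration:

```python
from itertools import product

def best_assignment(candidates, unique_labels):
    """candidates: {attr: [(label, score), ...]}. Enumerate all assignments,
    drop those that give a unique label to two different attributes,
    and return the highest-scoring surviving assignment."""
    attrs = list(candidates)
    best, best_score = None, -1.0
    for choice in product(*(candidates[a] for a in attrs)):
        labels = [label for label, _ in choice]
        if any(labels.count(u) > 1 for u in unique_labels):
            continue  # violates "two attrs can't both be this label"
        score = sum(s for _, s in choice)
        if score > best_score:
            best, best_score = dict(zip(attrs, labels)), score
    return best

cands = {
    "area":       [("address", 0.7), ("description", 0.3)],
    "extra-info": [("address", 0.6), ("description", 0.4)],
}
print(best_assignment(cands, unique_labels={"address"}))
# {'area': 'address', 'extra-info': 'description'}
```

Without the constraint, both attributes would greedily pick address; the constraint forces the globally best consistent assignment instead.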

The Constraint Handler Predictions from Meta-Learner: area: (address,0.7), (description,0.3); contact-phone: (agent-phone,0.9), (description,0.1); extra-info: (address,0.6), (description,0.4) The domain constraint a = address & b = address => a = b rules out mapping both area and extra-info to address, giving area: address, contact-phone: agent-phone, extra-info: description Can specify arbitrary constraints User feedback = domain constraint  ad-id = house-id Extended to handle domain heuristics  a = agent-phone & b = agent-name => a & b are usually close to each other

Exploiting Hierarchical Structure Existing learners flatten out all structures Developed XML learner  similar to the Naive Bayes learner input instance = bag of tokens  differs in one crucial aspect consider not only text tokens, but also structure tokens Example: "Victorian house with a view. Name your price! To see it, contact Gail Murphy at MAX Realtors." with Gail Murphy and MAX Realtors marked up as nested contact elements
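The "structure tokens" idea can be sketched with the standard-library XML parser: tokenize a fragment into tags as well as words, so the bag of tokens retains nesting information. The fragment and tag names are illustrative, not LSD's actual representation:

```python
import xml.etree.ElementTree as ET

def xml_tokens(fragment):
    """Return text tokens interleaved with structure tokens (tags)."""
    tokens = []
    def walk(elem):
        tokens.append("<" + elem.tag + ">")      # opening structure token
        if elem.text and elem.text.strip():
            tokens.extend(elem.text.split())     # text tokens
        for child in elem:
            walk(child)
        tokens.append("</" + elem.tag + ">")     # closing structure token
    walk(ET.fromstring(fragment))
    return tokens

frag = "<contact><name>Gail Murphy</name><firm>MAX Realtors</firm></contact>"
print(xml_tokens(frag))
# ['<contact>', '<name>', 'Gail', 'Murphy', '</name>',
#  '<firm>', 'MAX', 'Realtors', '</firm>', '</contact>']
```

A learner over these tokens can tell that "Gail Murphy" occurred inside a name element, which a flattened bag of words cannot.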

Related Work Rule-based approaches  TRANSCM [Milo&Zohar98], ARTEMIS [Castano&Antonellis99], [Palopoli et. al. 98], CUPID [Madhavan et. al. 01]  utilize only schema information Learner-based approaches  SEMINT [Li&Clifton94], ILA [Perkowitz&Etzioni95]  employ a single learner, limited applicability Others  DELTA [Clifton et. al. 97], CLIO [Miller et. al. 00][Yan et. al. 01] Multi-strategy learning in other domains  series of workshops [91,93,96,98,00]  [Freitag98], Proverb [Keim et. al. 99]

Summary LSD project  applies machine learning to schema matching Main ideas & contributions  use of multi-strategy learning  extend learning to handle domain & user-specified constraints  develop XML learner System design: A contribution to generic schema-matching  highly modular & extensible  handle multiple types of knowledge  continuously improve over time

End LSD

Mappings between Schemas LSD provides attribute correspondences, but not complete mappings Mappings generally are posed as views: define relations in one schema (typically either the mediated schema or the source schema), given data in the other schema  This allows us to “restructure” or “recompose + decompose” our data in a new way We can also define mappings between values in a view  We use an intermediate table defining correspondences – a “concordance table”  It can be filled in using some type of code, and corrected by hand

A Few Mapping Examples Movie(Title, Year, Director, Editor, Star1, Star2) PieceOfArt(ID, Artist, Subject, Title, TypeOfArt) MotionPicture(ID, Title, Year) Participant(ID, Name, Role) PieceOfArt(I, A, S, T, "Movie") :- Movie(T, Y, A, _, S1, S2), I = T || Y, S = S1 || S2 Movie(T, Y, D, E, S1, S2) :- MotionPicture(I, T, Y), Participant(I, D, "Dir"), Participant(I, E, "Editor"), Participant(I, S1, "Star1"), Participant(I, S2, "Star2") T1(CustID, CustName), e.g. (1234, "Smith, J."); T2(PennID, EmpName), e.g. (46732, "John Smith") Need a concordance table from CustIDs to PennIDs
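Joining T1 and T2 through a concordance table can be sketched as follows, using the example tuples above; the single concordance entry and the join logic are illustrative assumptions (in practice the table is filled in by code and corrected by hand):

```python
t1 = {1234: "Smith, J."}        # T1: CustID -> CustName
t2 = {46732: "John Smith"}      # T2: PennID -> EmpName
concordance = {1234: 46732}     # CustID -> PennID value correspondence

def join_via_concordance(t1, t2, conc):
    """Join two tables whose keys come from different identifier spaces,
    going through the concordance table."""
    return [(cid, t1[cid], pid, t2[pid])
            for cid, pid in conc.items()
            if cid in t1 and pid in t2]

print(join_via_concordance(t1, t2, concordance))
# [(1234, 'Smith, J.', 46732, 'John Smith')]
```

Without the concordance table there is no value-level way to connect CustID 1234 with PennID 46732, even once the schemas are matched.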

Two Important Approaches TSIMMIS [Garcia-Molina+97] – Stanford  Focus: semistructured data (OEM), OQL-based language (Lorel)  Creates a mediated schema as a view over the sources  Spawned a UCSD project called MIX, which led to a company now owned by BEA Systems  Other important systems of this vein: Penn Information Manifold [Levy+96] – AT&T Research  Focus: Web-based queriable sources, local-as-view mappings, relational model  Sources defined as views over mediated schema  Led to peer-to-peer integration approaches (Piazza, etc.)

TSIMMIS One of the first systems to support semi-structured data, which predated XML by several years: “OEM” An instance of a “global-as-view” mediation system  We define our global schema as views over the sources

XML vs. Object Exchange Model O1: book { O2: author { Bernstein } O3: author { Newcomer } O4: title { Principles of TP } } O5: book { O6: author { Chamberlin } O7: title { DB2 UDB } } (the corresponding XML represents the same two books as nested author and title elements)

Queries in TSIMMIS Specified in OQL-style language called Lorel  OQL was an object-oriented query language that looks like SQL  Lorel is, in many ways, a predecessor to XQuery Based on path expressions over OEM structures: select book where book.title = “DB2 UDB” and book.author = “Chamberlin” This is basically like XQuery, which we’ll use in place of Lorel and the MSL template language. Previous query restated = for $b in AllData()/book where $b/title/text() = “DB2 UDB” and $b/author/text() = “Chamberlin” return $b

Query Answering in TSIMMIS Basically, it’s view unfolding, i.e., composing a query with a view  The query is the one being asked  The views are the MSL templates for the wrappers  Some of the views may actually require parameters, e.g., an author name, before they’ll return answers Common for web forms (see Amazon, Google, …) XQuery functions (XQuery’s version of views) support parameters as well, so we’ll see these in action

Example: View Unfolding/Expansion A view consisting of branches and their customers Find all customers of the Perryridge branch create view all_customer as (select branch_name, customer_name from depositor, account where depositor.account_number = account.account_number ) union (select branch_name, customer_name from borrower, loan where borrower.loan_number = loan.loan_number ) select customer_name from all_customer where branch_name = 'Perryridge'

A Wrapper Definition in MSL Wrappers have templates and binding patterns ($X) in MSL; the template reformats a SQL query over Book(author, year, title): $$ = “select * from book where author=“ $X In XQuery, this might look like: define function GetBook($x AS xsd:string) as book { for $b in sql(“Amazon.DB”, “select * from book where author=‘” + $x + ”’”) return <book>{ $b/title }<author>{ $x }</author></book> } The union of GetBook’s results is unioned with others to form the view Mediator()

How to Answer the Query Given our query: for $b in Mediator()/book where $b/title/text() = “DB2 UDB” and $b/author/text() = “Chamberlin” return $b Find all wrapper definitions that:  Contain enough “structure” to match the conditions of the query  Or have already tested the conditions for us!

Query Composition with Views We find all views that define book with author and title, and we compose the query with each: define function GetBook($x AS xsd:string) as book { for $b in sql(“Amazon.DB”, “select * from book where author=‘” + $x + “’”) return <book>{ $b/title }<author>{ $x }</author></book> } for $b in Mediator()/book where $b/title/text() = “DB2 UDB” and $b/author/text() = “Chamberlin” return $b

Matching View Output to Our Query’s Conditions Determine that $b/book/author/text() = $x by matching the pattern on the function’s output: define function GetBook($x AS xsd:string) as book { for $b in sql(“Amazon.DB”, “select * from book where author=‘” + $x + “’”) return <book>{ $b/title }<author>{ $x }</author></book> } let $x := “Chamberlin” for $b in GetBook($x)/book where $b/title/text() = “DB2 UDB” return $b

The Final Step: Unfolding let $x := “Chamberlin” for $b in ( for $b’ in sql(“Amazon.com”, “select * from book where author=‘” + $x + “’”) return <book>{ $b’/title }<author>{ $x }</author></book> )/book where $b/title/text() = “DB2 UDB” return $b How do we simplify further to get to here? for $b in sql(“Amazon.com”, “select * from book where author=‘Chamberlin’”) where $b/title/text() = “DB2 UDB” return $b

Virtues of TSIMMIS Early adopter of semistructured data, greatly predating XML  Can support data from many different kinds of sources  Obviously, doesn’t fully solve heterogeneity problem Presents a mediated schema that is the union of multiple views  Query answering based on view unfolding Easily composed in a hierarchy of mediators

Limitations of TSIMMIS’ Approach Some data sources may contain data with certain ranges or properties  “Books by Aho”, “Students at UPenn”, …  If we ask a query for students at Columbia, don’t want to bother querying students at Penn…  How do we express these? Mediated schema is basically the union of the various MSL templates – as they change, so may the mediated schema

Schema mapping languages Schema mapping: set of expressions that describe a relationship between a set of schemata (typically two); in our case, the mediated schema and the schemas of the sources  Used to reformulate a query formulated in terms of the mediated schema into appropriate queries on the sources.  The result is called a logical query plan (refers only to the relations in the data sources) Schema mapping languages: global-as-view, local-as-view, global-local-as-view.

Components of a data integration system

Properties of mapping languages Flexibility: the formalism should be able to express a wide variety of relationships between schemata. Efficient reformulation: reformulation algorithms should have well understood properties and be efficient Easy update: Must be easy to add and remove sources

An Alternate Approach: The Information Manifold (Levy et al.) When you integrate something, you have some conceptual model of the integrated domain  Define that as a basic frame of reference, everything else as a view over it  “Local as View” May have overlapping/incomplete sources  Define each source as the subset of a query over the mediated schema  We can use selection or join predicates to specify that a source contains a range of values: ComputerBooks(…) ⊆ Books(Title, …, Subj), Subj = “Computers”

The Local-as-View Model The basic model is the following:  “Local” sources are views over the mediated schema  Sources have the data – mediated schema is virtual  Sources may not have all the data from the domain – “open-world assumption” The system must use the sources (views) to answer queries over the mediated schema

Query Answering Assumption: conjunctive queries, set semantics Suppose we have a mediated schema: author(aID, isbn, year), book(isbn, title, publisher) the query: q(a, t) :- author(a, i, _), book(i, t, p), t = “DB2 UDB” and sources: s1(a,t) ⊆ author(a, i, _), book(i, t, p), t = “123” … s5(a, t, p) ⊆ author(a, i, _), book(i, t, p), p = “SAMS” We want to compose the query with the source mappings – but they’re in the wrong direction!  Yet: everything in s1, s5 is an answer to the query!  The idea is to determine which views may be relevant to each subgoal of the query in isolation
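The last idea, collecting the views relevant to each query subgoal in isolation, is the core of the Bucket algorithm mentioned on the next slide. A relation-name-only sketch with hypothetical view bodies (a real algorithm such as Bucket or MiniCon also checks variable positions and comparison predicates):

```python
query_subgoals = ["author", "book"]

# View bodies, reduced to the relation names they mention.
# s9 is a hypothetical irrelevant source for contrast.
views = {
    "s1": ["author", "book"],
    "s5": ["author", "book"],
    "s9": ["publisher"],
}

def buckets(subgoals, views):
    """For each query subgoal, the views whose bodies mention it."""
    return {g: sorted(v for v, body in views.items() if g in body)
            for g in subgoals}

print(buckets(query_subgoals, views))
# {'author': ['s1', 's5'], 'book': ['s1', 's5']}
```

Candidate rewritings are then built by picking one view per bucket and checking that the combination is contained in the query; s9 never appears in any bucket, so it is pruned immediately.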

Answering Queries Using Views Numerous recently-developed algorithms for these  Inverse rules [Duschka et al.]  Bucket algorithm [Levy et al.]  MiniCon [Pottinger & Halevy]  Also related: “chase and backchase” [Popa, Tannen, Deutsch] Requires conjunctive queries

Advantages and Shortcomings of LAV Enables expressing incomplete information More robust way of defining mediated schemas and sources Mediated schema is clearly defined, less likely to change Sources can be more accurately described Computationally more expensive!

Summary of Data Integration Integration requires standardization on a single schema  Can be hard to get consensus  Today we have peer-to-peer data integration, e.g., Piazza [Halevy et al.], Orchestra [Ives et al.], Hyperion [Miller et al.] Some other aspects of integration were addressed in related papers  Overlap between sources; coverage of data at sources  Semi-automated creation of mappings and wrappers Data integration capabilities in commercial products: BEA’s Liquid Data, IBM’s DB2 Information Integrator, numerous packages from middleware companies

References Draft of the book on Data Integration by Alon Halevy (in preparation). AnHai Doan, Pedro Domingos, Alon Halevy, “Reconciling Schemas of Disparate Data Sources: A Machine Learning Approach”, SIGMOD 2001. Hector Garcia-Molina et al, “The TSIMMIS Approach to Mediation: Data Models and Languages”, Journal of Intelligent Information Systems (JIIS), 8(2), 1997. Alon Levy et al, “Querying Heterogeneous Information Sources using Source Descriptions”, VLDB 1996.