19 January 2007 Data Quality Meeting Alex Poulovassilis.

1 19 January 2007 Data Quality Meeting Alex Poulovassilis

2 Some current & recent research projects
AutoMed (EPSRC, BBSRC, MoD)
– has developed tools for semi-automatic transformation and integration of heterogeneous information sources
– provides a single framework for data cleansing/transformation/integration
– can handle both structured and semi-structured (RDF/S, XML) data; virtual, materialised and hybrid integration scenarios; bottom-up, top-down and P2P data integration
ISPIDER (BBSRC)
– is developing an integrated platform of proteomic data sources
– in collaboration with groups at EBI, Manchester, UCL
– is using AutoMed, in conjunction with OGSA-DAI, DQP and Taverna, to support biological data integration and web service interoperability

3 Some current & recent research projects
SeLeNe (EU)
– technologies for syndication and personalisation of learning resources: semantic reconciliation and integration of heterogeneous educational metadata; structured and unstructured querying of learning object descriptions, including through virtual views (RQL/RVL); automatic propagation and notification of changes in the descriptions of learning objects
– our XML and RDF ECA rule processing languages and systems were developed in this context
L4All and MyPlan (JISC)
– new techniques to support personalised planning of lifelong learning
– developing a system that allows users to record and share learning pathways through courses and modules in the London area
– in collaboration with IoE, Community College Hackney, UCAS, LearnDirect, Linking London Lifelong Learning Network

4 The AutoMed Project
Partners: Birkbeck and Imperial Colleges
Data integration based on schema equivalence
Low-level metamodel, the Hypergraph Data Model (HDM), in terms of which higher-level modelling languages are defined – therefore extensible with new modelling languages
Automatically provides a set of primitive equivalence-preserving schema transformations for higher-level modelling languages:
addT(c,q,C)
deleteT(c,q,C)
renameT(c,n,n',C)
There are also two further primitive transformations for capturing imprecise knowledge:
extendT(c,Range q1 q2,C)
contractT(c,Range q1 q2,C)
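The five primitives above can be thought of as plain data: a transformation pathway is just a sequence of steps. A minimal sketch of that idea (a hypothetical encoding for illustration, not the AutoMed API):

```python
# Hypothetical encoding of AutoMed's primitive schema transformations as
# plain data, so a pathway is a list of inspectable steps.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    kind: str                      # "add" | "delete" | "rename" | "extend" | "contract"
    construct: str                 # the schema construct c the step acts on
    query: Optional[str] = None    # the defining query q (IQL in AutoMed)
    new_name: Optional[str] = None # for rename steps
    lower: Optional[str] = None    # Range lower bound (extend/contract)
    upper: Optional[str] = None    # Range upper bound (extend/contract)

# An example pathway integrating a local schema towards a global one:
pathway = [
    Step("add", "student", query="[x | x <- ug] ++ [x | x <- phd]"),
    Step("rename", "sno", new_name="staff_no"),
    Step("contract", "ug", lower="Void", upper="Any"),
]

assert [s.kind for s in pathway] == ["add", "rename", "contract"]
```

Because each step records its defining query (or Range bounds), the pathway carries enough information for the data translation and lineage tracing described on the next slide.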

5 AutoMed Features
Schema transformations are automatically reversible:
addT/deleteT(c,q,C) by deleteT/addT(c,q,C)
extendT(c,Range q1 q2,C) by contractT(c,Range q1 q2,C)
renameT(c,n,n',C) by renameT(c,n',n,C)
Hence bi-directional transformation pathways (more generally, networks) are defined between schemas, i.e. both-as-view (BAV) transformation/integration
The queries within transformations allow automatic data translation, query translation and data lineage tracing
Schemas may or may not have a data source associated with them; thus virtual, materialised or hybrid integration can be supported
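The reversal rules above are entirely mechanical, which is what makes BAV pathways bi-directional. The following sketch illustrates this with an assumed tuple encoding of steps (not AutoMed's actual classes):

```python
# Sketch of automatic pathway reversal: each primitive has a known inverse,
# so a pathway S1 -> S2 reverses mechanically into a pathway S2 -> S1.
INVERSE = {"add": "delete", "delete": "add",
           "extend": "contract", "contract": "extend",
           "rename": "rename"}

def reverse_step(step):
    kind, construct, arg = step
    if kind == "rename":               # rename(c,n,n') reverses to rename(c,n',n)
        return ("rename", arg, construct)
    return (INVERSE[kind], construct, arg)   # same query / Range argument

def reverse_pathway(pathway):
    # Reverse the step order, then invert each step.
    return [reverse_step(s) for s in reversed(pathway)]

p = [("add", "student", "[x | x <- ug] ++ [x | x <- phd]"),
     ("rename", "sno", "staff_no"),
     ("contract", "ug", "Range Void Any")]
assert reverse_pathway(p) == [
    ("extend", "ug", "Range Void Any"),
    ("rename", "staff_no", "sno"),
    ("delete", "student", "[x | x <- ug] ++ [x | x <- phd]")]
```

Note that reversing twice yields the original pathway, mirroring the schema equivalence the primitives preserve.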

6 Schema Transformation/Integration Networks
[Figure: local schemas LS1, …, LSn linked by transformation pathways to union-compatible schemas US1, …, USn, which are linked by id transformations to the global schema GS]

7 AutoMed Architecture
[Figure: architecture comprising the Global Query Processor, Global Query Optimiser, Schema Evolution Tool, Schema Transformation and Integration Tools, Model Definition Tool, Schema and Transformation Repository, Model Definitions Repository, and Wrappers]

8 Data warehousing scenario

9 ISPIDER Project
Partners: Birkbeck, EBI, Manchester, UCL
Aims:
– vast, heterogeneous biological data
– need for interoperability
– need for efficient processing
– development of a Proteomics Grid Infrastructure: use existing proteomics resources and develop new ones; develop new proteomics clients for querying, visualisation, workflow etc.

10 Project Aims

11 myGrid / DQP / AutoMed
myGrid: a collection of services/components allowing high-level integration of data/applications for in-silico biology experiments
DQP: distributed query processing over OGSA-DAI (Open Grid Services Architecture Data Access and Integration) enabled resources
AutoMed + DQP: interoperation for integration and query processing over heterogeneous data resources
AutoMed + myGrid: interoperation for processing workflows incorporating heterogeneous services and resources

12 Recent/current AutoMed research
Using AutoMed for virtual data integration:
– BAV query processing: integrates GAV and LAV techniques
– supporting source or target schema evolution
Using AutoMed for materialised data integration:
– incremental view maintenance
– data lineage tracing
Lucas Zamboulis has been working on techniques for automatically transforming and integrating XML data
He has also investigated using correspondences to ontologies – RDFS schemas – to enhance these techniques

13 Other recent/ongoing AutoMed research
Dean Williams has been working on extracting structure from unstructured text sources
The aim here is to integrate information extracted from unstructured text with structured information available from other sources, using IE techniques in conjunction with AutoMed
Dean has used existing IE technology (the GATE tool from Sheffield) for the text annotation and IE part of this work
P2P query and update processing over AutoMed pathways
Extension with ECA rules and a P2P ECA rule execution engine (Sandeep Mittal) will allow automatic propagation of updates, e.g. for view and constraint maintenance
Planning to undertake further investigation of constraints and conditional data transformation/integration

14 Some possible synergies with the proposed data quality project
AutoMed & BAV provide a single framework to support data cleansing, transformation and integration
Applicable in a broad range of integration scenarios (top-down, bottom-up, P2P; virtual, materialised, hybrid)
Schema transformations can, optionally, be accompanied by a constraint, giving the possibility of investigating conditional data transformation and integration
Schema transformations can be used to propagate data forwards (view maintenance) and backwards (lineage tracing) – it would be interesting to see what other information could be propagated, e.g. the accuracy and timeliness of data
Flexible global query processing could be used to support imprecise/incomplete data integration

15 Extra slides

16 Schema Transformation/Integration Networks (contd)
On the previous slide:
– GS is a global schema
– LS1, …, LSn are local schemas
– US1, …, USn are union-compatible schemas
– the transformation pathways between each pair LSi and USi may consist of add, delete, rename, extend and contract primitive transformations, operating on any modelling construct defined in the AutoMed Model Definitions Repository
– the transformation pathway between USi and GS is similar
– the transformation pathway between each pair of union-compatible schemas consists of id transformation steps

17 Comparison with GAV & LAV Data Integration
Global-As-View (GAV) approach: specify GS constructs by view definitions over LSi constructs
Local-As-View (LAV) approach: specify LSi constructs by view definitions over GS constructs

18 GAV Example
student(id,name,left,degree) = [{x,y,z,w} | {x,y,z,w,_} <- ug] ++ [{x,y,z,w} | {x,y,z,w,_} <- phd; w = 'phd']
monitors(sno,id) = [{y,x} | {x,_,_,_,y} <- ug] ++ [{x,y} | {x,y} <- supervises]
staff(sno,sname,dept) = [{x,y,z} | {x,y,z} <- tutor] ++ [{x,y,z} | {x,y,z} <- supervisor]
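The GAV idea – each global table defined as a query over the local tables – can be illustrated with Python comprehensions over small invented extents (the sample rows and exact column layouts below are assumptions for illustration):

```python
# Local-source extents (invented sample data).
ug  = [(1, "Ann", 2005, "BSc", 10)]   # (id, name, left, degree, sno)
phd = [(2, "Bob", 2006, "phd", 11)]   # (id, name, left, degree, sno)
supervises = [(11, 2)]                # (sno, id)

# GAV view: student(id,name,left,degree) is the union of ug and phd students.
student = [(x, y, z, w) for (x, y, z, w, _) in ug] + \
          [(x, y, z, w) for (x, y, z, w, _) in phd if w == "phd"]

# GAV view: monitors(sno,id) pairs ug students with their tutors' staff
# numbers, and phd students with their supervisors.
monitors = [(v, x) for (x, _, _, _, v) in ug] + [(s, i) for (s, i) in supervises]

assert student == [(1, "Ann", 2005, "BSc"), (2, "Bob", 2006, "phd")]
assert monitors == [(10, 1), (11, 2)]
```

A query over `student` can then be answered directly by unfolding these definitions, which is exactly the GAV query-processing strategy.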

19 LAV Example
tutor(sno,sname) = [{x,y} | {x,y,_} <- staff; {x,z} <- monitors; {z,_,_,w} <- student; w <> 'phd']
ug(id,name,left,degree,sno) = [{x,y,z,w,v} | {x,y,z,w} <- student; {v,x} <- monitors; w <> 'phd']
phd, supervises and supervisor are defined similarly
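In the LAV direction each local table is a query over the global tables, which Python comprehensions can again illustrate (sample rows and column layouts are assumptions for illustration):

```python
# Global-schema extents (invented sample data).
student  = [(1, "Ann", 2005, "BSc"), (2, "Bob", 2006, "phd")]  # (id,name,left,degree)
monitors = [(10, 1), (11, 2)]                                   # (sno,id)
staff    = [(10, "Sue", "CS"), (11, "Tom", "CS")]               # (sno,sname,dept)

# LAV view: ug(id,name,left,degree,sno) = students whose degree is not a PhD,
# paired with the staff number of the person monitoring them.
ug = [(x, y, z, w, v) for (x, y, z, w) in student
      for (v, i) in monitors if i == x and w != "phd"]

# LAV view: tutor(sno,sname) = staff who monitor some non-PhD student.
tutor = [(s, n) for (s, n, _) in staff
         for (s2, i) in monitors if s2 == s
         for (i2, _, _, w) in student if i2 == i and w != "phd"]

assert ug == [(1, "Ann", 2005, "BSc", 10)]
assert tutor == [(10, "Sue")]
```

Answering a query over the local tables now requires rewriting it in terms of these rules, which is why LAV query processing is harder than GAV unfolding.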

20 Evolution problems of GAV and LAV
GAV does not readily support evolution of local schemas, e.g. adding an age attribute to phd invalidates some of the global view definitions
In LAV, changes to a local schema impact only the derivation rules defined for that schema, e.g. adding an age attribute to phd affects only the rule defining phd
But LAV has problems if one wants to evolve the global schema, since all the rules defining local schema constructs in terms of the global schema would need to be reviewed
These problems are exacerbated in P2P data integration scenarios, where there is no distinction between local and global schemas

21 AutoMed approach, Growing Phase
Assuming initially a schema U = S1 + S2:
addRel(<<student>>, [x | x <- <<ug>>] ++ [x | x <- <<phd>>])
addAtt(<<student,name>>, [{x,y} | {x,y} <- <<ug,name>>] ++ [{x,y} | {x,y} <- <<phd,name>>])
addAtt(<<student,left>>, [{x,y} | {x,y} <- <<ug,left>>] ++ [{x,y} | {x,y} <- <<phd,left>>])
…

22 AutoMed approach, Shrinking Phase (contd)
contrAtt(<<…>>, Range Void Any)
delAtt(<<…>>, [{x,y} | {x,y} <- <<…>>; x <- <<…>>])
delAtt(<<…>>, [{x,y} | {x,y} <- <<…>>; x <- <<…>>])
delRel(<<…>>, [x | x <- <<…>>])
Similarly, deletions for supervises and supervisor

23 AutoMed approach, Shrinking Phase
contrAtt(<<…>>, Range [{x,y} | {x,y} <- <<…>>] Any)
contrRel(<<…>>, Range [x | x <- <<…>>] Any)
Similarly, contractions for the ug attributes and relation

24 Schema Evolution in BAV
Unlike GAV/LAV/GLAV, the BAV framework readily supports the evolution of both local and global schemas
The evolution of a global or local schema is specified by a schema transformation pathway from the old to the new schema
[Figure: transformation pathways T from an old Global Schema S to a New Global Schema S', and from an old Local Schema S to a New Local Schema S']

25 Global Schema Evolution
Each transformation step t in T : S → S' is considered in turn:
– if t is an add, delete or rename, then schema equivalence is preserved and there is nothing further to do (except perhaps optimise the extended transformation pathway); the extended pathway can be used to regenerate the necessary GAV or LAV views
– if t is a contract, then there will be information present in S that is no longer available in S'; again, there is nothing further to do
– if t is an extend, then domain knowledge is required to determine whether the new construct in S' can in fact be derived from existing constructs; if not, there is nothing further to do; if so, the extend step is replaced by an add step
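The case analysis above can be sketched as a small decision procedure; the encoding and result strings below are illustrative only, not part of the AutoMed toolkit:

```python
# Decision procedure for one step t extending a global-schema pathway:
# add/delete/rename preserve equivalence, contract loses information,
# and extend needs domain knowledge to decide derivability.
def handle_global_evolution_step(kind, derivable_from_existing=False):
    if kind in ("add", "delete", "rename"):
        return "equivalence preserved; regenerate GAV/LAV views from pathway"
    if kind == "contract":
        return "information lost in new schema; nothing further to do"
    if kind == "extend":
        if derivable_from_existing:   # decided by domain knowledge
            return "replace extend by add"
        return "nothing further to do"
    raise ValueError(f"unknown step kind: {kind}")

assert handle_global_evolution_step("rename").startswith("equivalence preserved")
assert handle_global_evolution_step("extend", derivable_from_existing=True) == "replace extend by add"
```

Only the extend case requires human input; the other four step kinds are handled fully automatically, which is the point of the slide.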

26 Local Schema Evolution
This is more complicated, as it may also require changes to be propagated to the global schema(s)
Again, each transformation step t in T : S → S' is considered in turn
If t is an add, delete, rename or contract step, the evolution can be carried out automatically
If it is an extend, then domain knowledge is required
See our CAiSE'02, ICDE'03 and ER'04 papers for more details
The last of these discusses a materialised data integration scenario, where the old/new global/local schemas have an extent

27 Global Query Processing
We handle query language heterogeneity by translation into/from a functional intermediate query language, IQL
A query Q expressed in a high-level query language on a schema S is first translated into IQL (this functionality is not yet supported in the AutoMed toolkit)
View definitions are derived from the transformation pathways between S and the requested data source schemas
These view definitions are substituted into Q, reformulating it into an IQL query over source schema constructs
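The substitution step can be illustrated with a toy rewriter. The string-based rewriting and the view definitions below are purely illustrative; AutoMed derives its views from transformation pathways and operates on IQL syntax trees, not strings:

```python
# View definitions mapping global-schema names to source-schema
# expressions (invented examples in IQL-like notation).
views = {
    "student": "([x | x <- ug] ++ [x | x <- phd])",
    "staff": "([{x,y,z} | {x,y,z} <- tutor] ++ [{x,y,z} | {x,y,z} <- supervisor])",
}

def reformulate(query, views):
    # Substitute each view definition for its global-schema name,
    # yielding a query phrased over source-schema constructs only.
    for name, definition in views.items():
        query = query.replace(name, definition)
    return query

q = "[x | x <- student]"
assert reformulate(q, views) == "[x | x <- ([x | x <- ug] ++ [x | x <- phd])]"
```

After this unfolding, the query mentions only source constructs and can be optimised and shipped to the wrappers described on the next slide.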

28 Global Query Processing (contd)
Query optimisation (currently algebraic) and query evaluation then occur
During query evaluation, the evaluator submits to wrappers sub-queries that they are able to translate into the local query language
Currently, AutoMed supports wrappers for SQL, OQL, XPath, XQuery and flat-file data sources
The wrappers translate sub-query results back into the IQL type system
Further query post-processing then occurs in the IQL evaluator

29 Other AutoMed research at Imperial
Automatic generation of equivalences between different data models
A graphical schema & transformations editor
Data mining techniques for extracting schema equivalences
Optimising schema transformation pathways

30 DQP – AutoMed Interoperability
Data sources are wrapped with OGSA-DAI
AutoMed OGSA-DAI wrappers extract the data sources' metadata
Semantic integration of the data sources into an integrated AutoMed schema, using AutoMed transformation pathways
IQL queries submitted to this integrated schema are:
– reformulated into IQL queries on the data sources, using the AutoMed transformation pathways
– submitted to DQP for evaluation

31 Data source schema extraction
The AutoMed wrapper requests the schema of the data source using an OGSA-DAI service
The service replies with the source schema encoded in XML
The AutoMed wrapper creates the corresponding schema in the AutoMed repository
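A minimal sketch of this round trip, assuming an invented XML reply format (the element and attribute names below are illustrative; the actual OGSA-DAI schema document differs):

```python
# Sketch: parse an XML schema description into repository entries.
import xml.etree.ElementTree as ET

# Hypothetical service reply describing one relational source.
reply = """<schema name="proteins">
  <table name="protein">
    <column name="acc" type="varchar"/>
    <column name="mass" type="float"/>
  </table>
</schema>"""

def schema_from_xml(xml_text):
    # Walk the reply and build a dictionary the wrapper could then
    # register in a schema repository.
    root = ET.fromstring(xml_text)
    repo = {"name": root.get("name"), "tables": {}}
    for table in root.findall("table"):
        cols = {c.get("name"): c.get("type") for c in table.findall("column")}
        repo["tables"][table.get("name")] = cols
    return repo

repo = schema_from_xml(reply)
assert repo["tables"]["protein"] == {"acc": "varchar", "mass": "float"}
```

Once the schema is registered, transformation pathways can be defined against it exactly as for any other AutoMed schema.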

32 Using AutoMed in the BioMap Project
Relational/XML data sources containing protein sequence, structure, function and pathway data; gene expression data; other experimental data
Wrapping of data sources
Translation of source and global schemas into AutoMed's XML schema
Domain expert provides matchings between constructs in the source and global schemas
Automatic schema restructuring, with automatic generation of schema transformation pathways
See the DILS'05 paper for more details
[Figure: RDB and XML-file sources are wrapped into AutoMed relational and XMLDSS schemas, connected by transformation pathways to an AutoMed integrated schema held in an integrated database behind its own wrapper]

33 The London Knowledge Lab
A purpose-designed building
Science Research Infrastructure Fund: £6m
Research staff and students: 50
Location: Bloomsbury
Open: June 2004
Institute of Education, University of London – social scientists: experts in education, sociology, culture and media, semiotics, philosophy, knowledge management...
Birkbeck College, University of London – computer scientists: experts in information systems, information management, web technologies, personalisation, ubiquitous technologies…

34 LKL Research Themes
Research at the London Knowledge Lab consists mainly of externally funded projects (EU, EPSRC, ESRC, AHRB, BBSRC, JISC, Wellcome Trust) – currently about 25 projects
Four broad themes guide our work and inform our research strategy:
– new forms of knowledge
– turning information into knowledge
– the changing cultures of new media
– creating empowering technologies for formal and informal learning

35 Turning Information Into Knowledge
The need to cope with ubiquitous, complex, incomplete and inconsistent information is pervasive in our societies
How can people benefit from this information in their learning, working and social lives?
What new techniques are necessary for managing, accessing, integrating and personalising such information?
How do we design and build tools that help people understand such information and generate new knowledge from it?
