
Adaptive Query Processing
Amol Deshpande, University of Maryland; Zachary G. Ives, University of Pennsylvania; Vijayshankar Raman, IBM Almaden Research Center.
Thanks to Joseph M. Hellerstein, University of California, Berkeley.

Query Processing: Adapting to the World
Data independence facilitates modern DBMS technology:
– Separates specification (“what”) from implementation (“how”)
– The optimizer maps a declarative query → algebraic operations
Platforms and conditions are constantly changing, so query processing adapts the implementation to runtime conditions:
– Static applications vs. dynamic environments: d(app)/dt << d(env)/dt

Query Optimization and Processing (As Established in System R [SAC+’79])
> UPDATE STATISTICS (cardinalities, index lo/hi keys)
> SELECT * FROM Professor P, Course C, Student S WHERE P.pid = C.pid AND S.sid = C.sid
[Figure: dynamic programming + pruning heuristics enumerate plans over Professor, Course, and Student.]

Traditional Optimization Is Breaking
In traditional settings:
– Queries over many tables
– Unreliability of traditional cost estimation
– Success & maturity make problems more apparent, critical
In new environments (e.g., data integration, web services, streams, P2P, sensor nets, hosting):
– Unknown and dynamic characteristics for data and runtime
– Increasingly aggressive sharing of resources and computation
– Interactivity in query processing
Note that two distinct themes lead to the same conclusion:
– Unknowns: even static properties are often unknown in new environments, and often unknowable a priori
– Dynamics (d(env)/dt): can be very high
Both motivate intra-query adaptivity.

A Call for Greater Adaptivity
System R adapted query processing as statistics were updated:
– Measurement/analysis: periodic
– Planning/actuation: once per query
– Improved through the late 90s (see [Graefe ’93], [Chaudhuri ’98]): better measurement, models, search strategies
INGRES adapted execution many times per query:
– Each tuple could join with relations in a different order
– Different plan space, overheads, frequency of adaptivity
– Didn’t match applications & performance at that time
Recent work considers adaptivity in new contexts.

Observations on 20th-Century Systems
• Both INGRES & System R used adaptive query processing, to achieve data independence.
• They “adapt” at different timescales:
– INGRES goes ’round the whole loop many times per query
– System R decouples parts of the loop and is coarser-grained: measurement/modeling periodic, planning/actuation once per query
• A query post-mortem reveals different relational plan spaces:
– System R is direct: each query is mapped to a single relational algebra statement
– INGRES’ decision space generates a union of plans over horizontal partitions; this “super-plan” is not materialized, but recomputed via FindMin
• Both have zero-overhead actuation: they never waste query processing work.

Tangentially Related Work (an incomplete list!)
• Competitive optimization [Antoshenkov93]
– Choose multiple plans, run them in parallel for a time, and let the most promising one finish
– One-time feedback: execution doesn’t affect planning after the competition
• Parametric query optimization [INSS92, CG94, etc.]
– Given partial statistics in advance, do some planning and prune the space; at runtime, given the rest of the statistics, quickly finish planning
– Changes the interaction of measure/model and planning; no feedback whatsoever, so nothing to adapt to!
• “Self-tuning” / “autonomic” optimizers [CR94, CN97, BC02, etc.]
– Measure query execution (e.g., cardinalities): enhances measurement, but on its own doesn’t change the loop
– Consider building non-existent physical access paths (e.g., indexes, partitions): in some senses a separate loop, adaptive database design, at longer timescales

Tangentially Related Work II
• Robust query optimization [CHG02, MRS+04, BC05, etc.]
– Goals: pick plans that remain predictable across wide ranges of scenarios; pick the least expected cost plan
– Changes the cost function for planning, not necessarily the loop; if such functions are used in adaptive schemes, there is less fluctuation [MRS+04], hence fewer adaptations and less adaptation overhead
• Adaptive query operators [NKT88, KNT89, PCL93a, PCL93b]
– E.g., memory-adaptive sort and hash join
– Doesn’t address whole-query optimization problems
– However, if used with AQP, can result in complex feedback loops, especially if their actions affect each other’s models!

Extended Topics in Adaptive QP (an incomplete list!)
• Parallelism & distribution: River [A-D03], FLuX [SHCF03, SHB04], distributed eddies [TD03]
• Data streams: adaptive load shedding, shared query processing

Tutorial Focus
By necessity, we will cover only a piece of the picture here:
– Intra-query adaptivity, rather than:
autonomic / self-tuning optimization [CR’94, CN’97, BC’02, …],
robust / least expected cost optimization [CHG’02, MRS+’04, BC’05, …],
parametric or competitive optimization [A’93, INSS’92, CG’94, …],
adaptive operators, e.g., memory-adaptive sort & hash join [NKT’88, KNT’89, PCL’93a, PCL’93b, …]
– Conventional relations, rather than streams
– Single-site, single-query computation
For more depth, see our survey in now Publishers’ Foundations and Trends in Databases, Vol. 1 No. 1.

Tutorial Outline
• Motivation
• Non-pipelined execution
• Pipelined execution
– Selection ordering
– Multi-way join queries
• Putting it all in context
• Recap / open problems

Low-Overhead Adaptivity: Non-pipelined Execution

Late Binding; Staged Execution
Materialization points make natural decision points where the next stage can be changed with little cost:
– Re-run the optimizer at each point to get the next stage
– Choose among a precomputed set of plans: parametric query optimization [INSS’92, CG’94, …]
[Figure: normal execution, with pipelines separated by materialization points, e.g., at a sort, GROUP BY, etc.]

Mid-query Reoptimization [KD’98, MRS+04]
The most widely studied technique: federated systems (InterViso 90, MOOD 96), Red Brick, query scrambling (96), mid-query re-optimization (98), progressive optimization (04), proactive re-optimization (05), …
Challenges:
– Where? Choose checkpoints at which to monitor cardinalities; balance overhead against opportunities for switching plans.
– When? If the actual cardinality is too different from the estimate, re-optimize to switch to a new plan, but avoid unnecessary re-optimization (where the plan doesn’t change).
– How? Try to retain previous computation during plan switching.
[Figure: an example plan before and after switching at a checkpoint.]
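To make the checkpoint test concrete, here is a minimal Python sketch; all names, and the 2x deviation threshold, are illustrative assumptions, not taken from any of the cited systems. It re-optimizes only when the observed cardinality at a checkpoint deviates badly from the optimizer's estimate.

```python
def estimate_is_off(estimated, observed, threshold=2.0):
    """True if the observed cardinality deviates from the estimate by more
    than `threshold`x in either direction (the threshold is an assumption)."""
    if estimated == 0:
        return observed > 0
    ratio = observed / estimated
    return ratio > threshold or ratio < 1.0 / threshold

def at_checkpoint(plan, estimated, observed, reoptimize):
    """Invoked when execution reaches a (lazy or eager) checkpoint.
    `reoptimize` is a callback that re-runs the optimizer with the observed
    cardinality; it may reuse already-materialized intermediate results."""
    if estimate_is_off(estimated, observed):
        return reoptimize(plan, observed)   # switch to the new plan
    return plan                             # estimate close enough: keep going
```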

Where to Place Checkpoints?
• Lazy checkpoints: placed above materialization points; no work need be wasted if we switch plans here.
• Eager checkpoints: can be placed anywhere; may have to discard some partially computed results; useful where optimizer estimates have high uncertainty.
More checkpoints mean more opportunities for switching plans; the overhead of (simple) monitoring is small [SLMK’01]. Consideration: it is easier to switch plans at some checkpoints than others.
[Figure: a plan annotated with lazy and eager checkpoint positions.]

When to Re-optimize?
Suppose the actual cardinality is different from the estimate: how large a difference should trigger a re-optimization?
Idea: do not re-optimize if the current plan is still the best.
1. Heuristics-based [KD’98]: e.g., re-optimize only if the cost of re-optimization is small relative to the time remaining to finish execution.
2. Validity range [MRS+04]: a precomputed range of a parameter (e.g., a cardinality) within which the plan is optimal.
– Place eager checkpoints where the validity range is narrow
– Re-optimize if the value falls outside this range
– Variation: bounding boxes [BBD’05]

How to Reoptimize
Getting a better plan:
– Plug in the actual cardinality information acquired during this query (possibly as histograms), and re-run the optimizer.
Reusing work when switching to the better plan:
– Treat fully computed intermediate results as materialized views (everything that is under a materialization point).
– Note: it is optional for the optimizer to use these in the new plan.
Other approaches are possible (e.g., query scrambling [UFA’98]).

Pipelined Execution

Adapting Pipelined Queries
Adapting pipelined execution is often necessary:
– Too few materializations in today’s systems
– Long-running queries
– Wide-area data sources
– Potentially endless data streams
The tricky issues:
– Some results may already have been delivered to the user: ensuring correctness is non-trivial
– Database operators build up state: we must reason about it during adaptation, and may need to manipulate it

Adapting Pipelined Queries
We discuss three subclasses of the problem:
– Selection ordering (stateless): very good analytical and theoretical results; increasingly important in web querying, streams, sensornets; certain classes of join queries reduce to it
– Select-project-join queries (stateful) with history-independent execution: operator state is largely independent of execution history, so execution decisions for a tuple are independent of prior tuples
– Select-project-join queries (stateful) with history-dependent execution: operator state depends on execution history, and we must reason about the state during adaptation

Pipelined Execution Part I: Adaptive Selection Ordering

Adaptive Selection Ordering
Complex predicates on single relations are common, e.g., on an employee relation:
((salary > …) AND (status = 2)) OR ((salary between … and …) AND (age < 30) AND (status = 1)) OR …
Selection ordering problem: decide the order in which to evaluate the individual predicates against the tuples. We focus on conjunctive predicates (containing only ANDs).
Example query:
select * from R where R.a = 10 and R.b < 20 and R.c like ‘%name%’;

Basics: Static Optimization
Find a single order of the selections to be used for all tuples.
Query: select * from R where R.a = 10 and R.b < 20 and R.c like ‘%name%’;
Query plans considered: pipelines of the three predicates in some order, e.g., R → (R.a = 10) → (R.b < 20) → (R.c like …) → result. With three predicates, 3! = 6 distinct plans are possible.

Static Optimization
Cost metric: CPU instructions.
Computing the cost of a plan requires the costs and the selectivities of the predicates. For the pipeline R → R1 (R.a = 10) → R2 (R.b < 20) → R3 (R.c like …), with per-predicate costs c1, c2, c3 and selectivities s1, s2, s3, under the independence assumption:
cost per tuple = c1 + s1 c2 + s1 s2 c3
cost(plan) = |R| * (c1 + s1 c2 + s1 s2 c3)
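The formula translates directly into code. A small Python helper (names are illustrative) that evaluates any ordering of independent predicates under this cost model:

```python
def plan_cost(num_tuples, costs, selectivities):
    """cost(plan) = |R| * (c1 + s1*c2 + s1*s2*c3 + ...), assuming independence.
    `costs` and `selectivities` are given in the order the predicates are applied."""
    per_tuple, reach = 0.0, 1.0   # `reach` = fraction of tuples reaching the next predicate
    for c, s in zip(costs, selectivities):
        per_tuple += reach * c
        reach *= s
    return num_tuples * per_tuple

# Example: plan_cost(1_000_000, costs=[1, 2, 5], selectivities=[0.1, 0.5, 0.9])
```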

Static Optimization
Rank ordering algorithm for independent selections [IK’84]:
– Apply the predicates in decreasing order of rank: (1 – s) / c, where s = selectivity and c = cost.
For correlated selections:
– NP-hard under several different formulations (e.g., when given a random sample of the relation)
– Greedy algorithm, shown to be 4-approximate [BMMNW’04]: apply the selection with the highest (1 – s)/c; compute the selectivities of the remaining selections over the result (conditional selectivities); repeat. (See the sketch below.)
– Conditional plans? [DGHM’05]
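Here is a small Python sketch of both algorithms; the predicate representation (dicts with "sel", "cost", and a callable "fn") is an assumption made for illustration.

```python
def rank_order(preds):
    """Rank ordering for independent selections [IK'84]: apply predicates in
    decreasing order of (1 - selectivity) / cost."""
    return sorted(preds, key=lambda p: (1 - p["sel"]) / p["cost"], reverse=True)

def greedy_order(preds, sample):
    """4-approximate greedy for correlated selections [BMMNW'04] (sketch):
    repeatedly pick the predicate with the highest (1 - s) / c, where s is the
    conditional selectivity measured over the sample tuples that survived the
    predicates chosen so far."""
    remaining, order, survivors = list(preds), [], list(sample)
    while remaining:
        def score(p):
            s = sum(1 for t in survivors if p["fn"](t)) / max(len(survivors), 1)
            return (1 - s) / p["cost"]
        best = max(remaining, key=score)
        order.append(best)
        remaining.remove(best)
        survivors = [t for t in survivors if best["fn"](t)]
    return order
```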

Adaptive Greedy [BMMNW’04]
Context: pipelined query plans over streaming data.
Example: three independent predicates R.a = 10, R.b < 20, R.c like …, each costing 1 unit, with some initial estimated selectivities. The optimal execution plan orders the predicates by selectivity (because the costs are identical).

Adaptive Greedy [BMMNW’04]
1. Monitor the selectivities over the recent past (sliding window): randomly sample R, and estimate the selectivities of the predicates over the tuples of this profile.
2. Re-optimize if the predicates are no longer ordered by selectivity: the reoptimizer checks whether the current plan is optimal w.r.t. the new selectivities, and if not, reoptimizes using the profile. (A sketch follows.)
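A minimal sketch of the monitoring side, assuming (as in the example above) that all predicates have identical cost, so the optimal order is simply by increasing selectivity; the class name and window size are hypothetical.

```python
from collections import deque

class AGreedyProfile:
    """Sliding-window profile of sampled tuples, evaluated against all predicates,
    used to detect when the current order is no longer ordered by selectivity."""
    def __init__(self, preds_in_current_order, window=1000):
        self.preds = preds_in_current_order
        self.profile = deque(maxlen=window)   # recent randomly-sampled tuples

    def add_sample(self, tup):
        self.profile.append(tup)

    def violation_detected(self):
        if not self.profile:
            return False
        # Estimate each predicate's selectivity over the profile.
        sels = [sum(1 for t in self.profile if p(t)) / len(self.profile)
                for p in self.preds]
        # With identical costs, predicates should be applied in increasing
        # selectivity order; any inversion triggers reoptimization.
        return any(sels[i] > sels[i + 1] for i in range(len(sels) - 1))
```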

Adaptive Greedy [BMMNW’04]: Correlated Selections
Must monitor conditional selectivities: besides sel(R.a = 10), sel(R.b < 20), sel(R.c …), also sel(R.b < 20 | R.a = 10), sel(R.c like … | R.a = 10), and sel(R.c like … | R.a = 10 and R.b < 20).
The reoptimizer uses the conditional selectivities to detect violations, and uses the profile to reoptimize. O(n^2) selectivities need to be monitored.

Adaptive Greedy [BMMNW’04]
Advantages:
– Can adapt very rapidly
– Handles correlations
– Theoretical guarantees on performance [MBMW’05], not known for any other AQP algorithms
Disadvantages:
– May have high runtime overheads: profile maintenance (must evaluate a random fraction of tuples against all operators), detecting optimality violations, and reoptimization cost (can require multiple passes over the profile)

Eddies [AH’00]
Query processing as routing of tuples through operators.
Instead of a traditional pipelined query plan (R → R.a = 10 → R.b < 20 → R.c like … → result), an eddy operator intercepts tuples from sources and the output tuples from operators, and executes the query by routing source tuples through the operators.
The eddy encapsulates all aspects of adaptivity in a “standard” dataflow operator: measure, model, plan and actuate.

Eddies [AH’00]: Walkthrough
Each tuple entering the eddy carries two bit-vectors:
– ready bit i: 1 means operator i can be applied, 0 means operator i can’t be applied
– done bit i: 1 means operator i has been applied, 0 means operator i hasn’t been applied
These bits are used to decide the validity and the need of applying operators. For a query with only selections, ready = complement(done).
[Figure walkthrough: tuple r1 (a=15, b=10, c=“AnameA…”) is routed through the operators; when it satisfies a predicate it returns to the eddy with that operator’s done bit set, and when it fails a predicate it is discarded and the eddy looks at the next tuple. Tuple r2 (a=10, b=15, c=“AnameA…”) satisfies all three predicates; once done = 111, it is sent to the output.]

Eddies [AH’00]
Adapting the order is easy:
– Just change the operators to which tuples are sent
– Can be done on a per-tuple basis
– Can be done in the middle of a tuple’s “pipeline”
How are the routing decisions made? Using a routing policy.
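A toy, single-threaded eddy loop for a selections-only query, showing how the done bitset and a pluggable routing policy fit together. This is an illustrative sketch, not the Telegraph implementation; since ready = complement(done) for selections, only the done bits are tracked.

```python
def run_eddy(source, operators, routing_policy):
    """`operators` are boolean predicate functions; `routing_policy(done)` returns
    the index of a not-yet-applied operator (bit i of `done` = operator i applied)."""
    n = len(operators)
    all_done = (1 << n) - 1
    for tup in source:
        done, alive = 0, True
        while alive and done != all_done:
            i = routing_policy(done)        # eddy picks the next operator
            if operators[i](tup):
                done |= 1 << i              # predicate satisfied: mark it applied
            else:
                alive = False               # predicate failed: drop the tuple
        if alive:
            yield tup                       # done = all ones: send to output
```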

Routing Policy 1: Non-adaptive
• Simulates a single static order, e.g., operator 1, then operator 2, then operator 3.
• Routing policy: if done = 000, route to operator 1; after operator 1 (done = 100), route to operator 2; after operators 1 and 2 (done = 110), route to operator 3.
• Table lookups: very efficient.
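In the toy eddy sketch above, this policy is just a dictionary lookup on the done bitset; the encoding below uses bit i for operator i, least-significant bit first, so it differs from the slide's left-to-right notation, and the predicate names in the usage comment are placeholders.

```python
# Non-adaptive routing simulating the fixed order op0 -> op1 -> op2.
STATIC_ROUTE = {0b000: 0, 0b001: 1, 0b011: 2}
static_policy = lambda done: STATIC_ROUTE[done]

# e.g.: results = run_eddy(rows, [pred_a, pred_b, pred_c], static_policy)
```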

Overhead of Routing
• PostgreSQL implementation of eddies using bitset lookups [Telegraph Project]
• Queries with 3 selections, of varying cost
– The routing policy uses a single static order, i.e., no adaptation

Routing Policy 2: Deterministic
• Monitor costs and selectivities continuously.
• Reoptimize periodically using KBZ.
Statistics maintained: costs of operators, selectivities of operators.
Routing policy: use a single order for a batch of tuples; periodically apply KBZ.
Can use the A-Greedy policy for correlated predicates.

Overhead of Routing and Reoptimization
• Adaptation using batching:
– Reoptimized every X tuples using the monitored selectivities
– Identical selectivities throughout, so the experiment measures only the overhead

Routing Policy 3: Lottery Scheduling
• The originally suggested routing policy [AH’00].
• Applicable when each operator runs in a separate “thread” (can also be done single-threaded, via an event-driven query executor).
• Uses two easily obtainable pieces of information for making routing decisions: the busy/idle status of operators, and the tickets per operator.

Routing Policy 3: Lottery Scheduling
Routing decisions based on the busy/idle status of operators.
Rule: IF an operator is busy, THEN do not route more tuples to it.
Rationale: every thread gets equal time, so IF an operator is busy, THEN its cost is perhaps very high.

Routing Policy 3: Lottery Scheduling
Routing decisions based on tickets. Rules:
1. Route a new tuple to an operator chosen randomly, weighted according to the number of tickets.
2. When a tuple is routed to operator O_i: tickets(O_i)++.
3. When O_i returns a tuple to the eddy: tickets(O_i)--.
Example: with tickets(O1) = 10, tickets(O2) = 70, tickets(O3) = 20, a new tuple will be routed to O1 w.p. 0.1, O2 w.p. 0.7, O3 w.p. 0.2; subsequent hops are drawn the same way among the operators not yet applied.
Rationale: tickets(O_i) roughly corresponds to (1 – selectivity(O_i)), so more tuples are routed to the more selective operators.
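A compact sketch of the ticket bookkeeping (class and method names are illustrative, not from [AH'00]); the weighted random draw uses the current ticket counts, and the increment/decrement mirrors rules 2 and 3 above.

```python
import random

class LotteryRouter:
    def __init__(self, n_ops):
        self.tickets = [1] * n_ops          # start every operator with one ticket

    def choose(self, eligible):
        """Rule 1: weighted random choice among eligible (not-yet-applied) operators."""
        weights = [self.tickets[i] for i in eligible]
        return random.choices(eligible, weights=weights, k=1)[0]

    def on_route(self, i):
        self.tickets[i] += 1                # rule 2: tuple routed to operator i

    def on_return(self, i):
        self.tickets[i] -= 1                # rule 3: operator i returned the tuple

# Operators that consume many tuples but return few (i.e., selective ones)
# accumulate tickets and therefore win more lotteries.
```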

Routing Policy 3: Lottery Scheduling
Effect of the combined lottery scheduling policy:
– Low-cost operators get more tuples
– Highly selective operators get more tuples
– Some tuples are randomly, knowingly routed according to sub-optimal orders, in order to explore; this is necessary to detect selectivity changes over time

Routing Policy 4: Content-based Routing
• Routing decisions are made based on the values of the tuple’s attributes [BBDW’05]; also called “conditional planning” in a static setting [DGHM’05].
• Less useful unless the predicates are expensive (at the least, more expensive than evaluating r.d > 100).
Example: the eddy notices that R.d > 100 → sel(op1) > sel(op2), and R.d < 100 → sel(op1) < sel(op2). Routing decisions for a new tuple r: IF (r.d > 100), route to op1 first w.h.p.; ELSE route to op2 first w.h.p.
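An illustrative sketch of such a policy; the classifier best_op_for, the attribute name, and the exploration probability are assumptions made for the example. Each tuple is routed to the operator believed best for its attribute value, while occasionally exploring the alternative so the learned statistics can be revalidated.

```python
import random

def content_based_first_op(tup, best_op_for, ops=("op1", "op2"), explore_prob=0.1):
    """`best_op_for(value)` is a learned mapping from an attribute value to the
    operator believed best to apply first for tuples in that value range."""
    preferred = best_op_for(tup["d"])       # e.g., depends on whether tup["d"] > 100
    if random.random() < explore_prob:      # with small probability, explore
        return random.choice([o for o in ops if o != preferred])
    return preferred                        # otherwise route to the preferred op first
```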

Routing Policies that Have Been Studied
• Deterministic [D03]: monitor costs & selectivities continuously; re-optimize periodically using rank ordering (or A-Greedy for correlated predicates).
• Lottery scheduling [AH00]: each operator runs in a thread with an input queue; “tickets” are assigned according to tuples input/output; route each tuple to the next eligible operator with room in its queue, based on the number of “tickets” and “backpressure”.
• Content-based routing [BBDW05]: different routes (plans) for different data, based on attribute values.

Pipelined Execution Part II: Adaptive Join Processing

Adaptive Join Processing: Outline
• Single streaming relation
– Left-deep pipelined plans
• Multiple streaming relations
– Execution strategies for multi-way joins
– History-independent execution
– History-dependent execution

Left-Deep Pipelined Plans
The simplest method of joining tables:
– Pick a driver table (R); call the rest driven tables
– Pick access methods (AMs) on the driven tables (scan, hash, or index)
– Order the driven tables
– Flow R tuples through the driven tables:
for each r ∈ R do: look for matches for r in A; for each match a do: look for matches for (r, a) in B; …
(A runnable sketch follows.)
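The flow of driver tuples through the driven tables can be sketched directly in Python; here each driven table is represented abstractly as a lookup function returning the matches for the partial result so far (an illustrative simplification, not a real access-method interface).

```python
def left_deep_pipeline(driver, driven_lookups):
    """`driver` yields R tuples; `driven_lookups` is the ordered list of driven
    tables, each a function mapping a partial result tuple to its matches."""
    def extend(partial, remaining):
        if not remaining:
            yield partial                   # matched against every driven table
            return
        lookup, rest = remaining[0], remaining[1:]
        for match in lookup(partial):       # e.g., an index or hash probe
            yield from extend(partial + (match,), rest)

    for r in driver:
        yield from extend((r,), driven_lookups)
```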

Adapting a Left-deep Pipelined Plan
The same setup: pick a driver table (R) and access methods on the driven tables, order the driven tables, and flow R tuples through them. Adapting the order of the driven tables is almost identical to selection ordering.

Adapting the Join Order
• Let c_i = cost per lookup into the i’th driven table, and s_i = fanout of the lookup.
• As with selection ordering, cost = |R| * (c_1 + s_1 c_2 + s_1 s_2 c_3).
• Caveats:
– Fanouts s_1, s_2, … can be > 1
– Precedence constraints
– Caching issues
• Can use rank ordering or A-Greedy for adaptation (subject to the caveats).

Adapting a Left-deep Pipelined Plan
Ordering the driven tables is only part of the problem: what about adapting the choice of the driver table and of the access methods?

Adapting a Left-deep Pipelined Plan
• Adapting the choice of driver table [L+07]: the key issue is duplicates; carefully use indexes to achieve this.
• Adapting the choice of access methods:
– Static optimization: explore all possibilities and pick the best
– Adaptive: run multiple plans in parallel for a while, then pick one and discard the rest [Antoshenkov ’96]; cannot easily explore combinatorial options
• SteMs [RDH’03] handle both as well.

Adaptive Join Processing: Outline
• Single streaming relation: left-deep pipelined plans
• Multiple streaming relations
– Execution strategies for multi-way joins
– History-independent execution: MJoins, SteMs
– History-dependent execution: eddies with joins, corrective query processing

Example Join Query & Database
Query: select * from students, enrolled, courses where students.name = enrolled.name and enrolled.course = courses.course
Students(Name, Level): (Joe, Junior), (Jen, Senior)
Enrolled(Name, Course): (Joe, CS1), (Jen, CS2)
Courses(Course, Instructor): (CS2, Smith)
Students ⋈ Enrolled: (Joe, Junior, CS1), (Jen, Senior, CS2)
Students ⋈ Enrolled ⋈ Courses: (Jen, Senior, CS2, Smith)

Symmetric/Pipelined Hash Join [RS86, WA91]
Example: select * from students, enrolled where students.name = enrolled.name, with Students and Enrolled tuples arriving interleaved and results produced incrementally.
• Simultaneously builds and probes hash tables on both sides.
• Widely used: adaptive query processing, stream joins, online aggregation, …
• The naïve version degrades to NLJ once memory runs out: quadratic time complexity, and memory needed = sum of the inputs.
• Improved by XJoins [UF 00] and Tukwila DPJ [IFFLW 99].
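A compact sketch of the core idea, assuming a single equi-join key and a merged input stream tagged with the side each tuple comes from (both are simplifications for illustration):

```python
from collections import defaultdict

def symmetric_hash_join(stream, left_key, right_key):
    """`stream` yields ('L', tuple) or ('R', tuple) in arrival order; each
    arriving tuple is built into its own hash table and immediately probes
    the other side's table, so results are produced incrementally."""
    left_ht, right_ht = defaultdict(list), defaultdict(list)
    for side, tup in stream:
        if side == 'L':
            k = left_key(tup)
            left_ht[k].append(tup)
            for match in right_ht.get(k, []):
                yield (tup, match)
        else:
            k = right_key(tup)
            right_ht[k].append(tup)
            for match in left_ht.get(k, []):
                yield (match, tup)
```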

Multi-way Pipelined Joins over Streaming Relations
Three alternatives:
– Using binary join operators
– Using a single n-ary join operator (MJoin) [VNB’03]
– Using unary operators [RDH’03]

[Figure: a tree of binary symmetric hash joins, (Students ⋈ Enrolled) ⋈ Courses, with hash tables on S.Name/E.Name and on E.Course/C.course. The intermediate hash tables hold (Name, Level, Course) tuples such as (Jen, Senior, CS2): materialized state that depends on the query plan used. History-dependent!]

Multi-way Pipelined Joins over Streaming Relations
Three alternatives:
– Using binary join operators: history-dependent execution; hard to reason about the impact of adaptation; may need to migrate the state when changing plans
– Using a single n-ary join operator (MJoin) [VNB’03]
– Using unary operators [RDH’03]

[Figure: an MJoin over Students, Enrolled, and Courses, with one hash table per relation (S.Name, E.Name, E.Course, C.course).]
Probing sequences: a Students tuple probes Enrolled, then Courses; an Enrolled tuple probes Students, then Courses; a Courses tuple probes Enrolled, then Students.
The hash tables contain all tuples that have arrived so far, irrespective of the probing sequences used: history-independent execution!

Multi-way Pipelined Joins over Streaming Relations
Three alternatives:
– Using binary join operators: history-dependent execution
– Using a single n-ary join operator (MJoin) [VNB’03]: history-independent execution, with well-defined state that is easy to reason about (especially in data stream processing); but performance may be suboptimal [DH’04], since no intermediate tuples are stored and they may need to be recomputed
– Using unary operators [RDH’03]

Breaking the Atomicity of Probes and Builds in an N-ary Join [RDH’03]
[Figure: the MJoin’s per-relation hash tables are split out into unary state modules (SteM S, SteM E, SteM C), with an eddy routing tuples among them.]

Multi-way Pipelined Joins over Streaming Relations
Three alternatives:
– Using binary join operators: history-dependent execution
– Using a single n-ary join operator (MJoin) [VNB’03]: history-independent execution, with well-defined state that is easy to reason about, but performance may be suboptimal [DH’04]
– Using unary operators [RDH’03]: similar to MJoins, but enables additional adaptation

Adaptive Join Processing: Outline
• Single streaming relation: left-deep pipelined plans
• Multiple streaming relations
– Execution strategies for multi-way joins
– History-independent execution: MJoins, SteMs
– History-dependent execution: eddies with joins, corrective query processing

MJoins [VNB’03]
Choosing probing sequences:
– For each relation, use a left-deep pipelined plan (based on hash indexes)
– Can use selection ordering algorithms, independently for each relation
Adapting MJoins:
– Adapt each probing sequence independently; e.g., StreaMon [BW’01] used A-Greedy for this purpose
A-Caching [BMWM’05]:
– Maintain intermediate caches to avoid recomputation
– Alleviates some of the performance concerns
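A simplified MJoin sketch (class name hypothetical), restricted for brevity to a single shared join key; real MJoins probe on per-edge join attributes. It shows the two properties discussed above: one hash table per base relation, and per-relation probing sequences that can be re-ordered independently without any state migration.

```python
from collections import defaultdict

class MJoinSketch:
    def __init__(self, relations, key_of):
        self.key_of = key_of                       # key_of(rel, tuple) -> join key
        self.tables = {r: defaultdict(list) for r in relations}
        # Default probing sequence per relation: all the others, in order.
        self.probe_seq = {r: [x for x in relations if x != r] for r in relations}

    def set_probing_sequence(self, rel, seq):
        self.probe_seq[rel] = seq                  # adapted independently, e.g. by A-Greedy

    def insert(self, rel, tup):
        """Build `tup` into its relation's table, then probe the others in this
        relation's probing sequence; intermediate results are recomputed, never stored."""
        k = self.key_of(rel, tup)
        self.tables[rel][k].append(tup)
        partials = [(tup,)]
        for other in self.probe_seq[rel]:
            matches = self.tables[other].get(k, [])
            partials = [p + (m,) for p in partials for m in matches]
            if not partials:
                return []                          # no match: no output for this tuple
        return partials
```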

State Modules (SteMs) [RDH’03]
A SteM is an abstraction of a unary operator: it encapsulates the state, the access methods, and the operations on a single relation.
By adapting the routing between SteMs, we can:
– Adapt the join ordering (as before)
– Adapt access method choices
– Adapt join algorithms: hybridized join algorithms (e.g., on memory overflow, switch from hash join to index join), and a much larger space of join algorithms
– Adapt join spanning trees
SteMs are also useful for sharing state across joins, which is advantageous for continuous queries [MSHR’02, CF’03].

Adaptive Join Processing: Outline
• Single streaming relation: left-deep pipelined plans
• Multiple streaming relations
– Execution strategies for multi-way joins
– History-independent execution: MJoins, SteMs
– History-dependent execution: eddies with binary joins (state management using STAIRs), corrective query processing

Eddies with Binary Joins [AH’00]
[Figure: an eddy routes Students, Enrolled, and Courses tuples, and intermediate results such as e1 ⋈ c1, among a selection on S.Name and the joins S ⋈ E and E ⋈ C, feeding the output.]
• For correctness, the eddy must obey routing constraints; some form of tuple lineage is used to enforce them.
• Any join algorithms can be used, but pipelined operators are preferred because they provide quick feedback.

Eddies with Symmetric Hash Joins
[Figure: an eddy over the symmetric hash joins S ⋈ E and E ⋈ C, with hash tables on S.Name/E.Name and E.Course/C.Course. As tuples such as (Joe, Jr), (Jen, Sr), (Joe, CS1), (Jen, CS2), and (CS2, Smith) are routed through, intermediate results like (Joe, Jr, CS1) and (Jen, CS2, Smith) accumulate inside the join hash tables.]

Burden of Routing History [DH’04]
As a result of routing decisions, state gets embedded inside the operators: history-dependent execution!
[Figure: the same eddy as before, with intermediate tuples lodged in the join operators’ hash tables.]

Modifying State: STAIRs [DH’04]
Observation: changing the operator ordering is not sufficient; we must allow manipulation of state.
New operator: STAIR
– Exposes join state to the eddy, by splitting a join into two halves
– Provides state management primitives that guarantee correctness of execution
– Is able to lift the burden of history
– Enables many other adaptation opportunities, e.g., adapting spanning trees, selective caching, pre-computation

Recap: Eddies with Binary Joins
• Routing constraints are enforced using tuple-level lineage.
• Must choose access methods and the join spanning tree beforehand; SteMs relax this restriction [RDH’03].
• The operator state makes the behavior unpredictable, unless there is only one streaming relation.
• The routing policies explored are the same as for selections; the policy can be tuned for an interactivity metric [RH’02].

Adaptive Join Processing: Outline
• Single streaming relation: left-deep pipelined plans
• Multiple streaming relations
– Execution strategies for multi-way joins
– History-independent execution: MJoins, SteMs
– History-dependent execution: eddies with binary joins (state management using STAIRs), corrective query processing

Carefully Managing State: Corrective Query Processing (CQP) [I’02, IHW’04]
Example query, over F(fid, from, to, when), T(ssn, flight), C(parent, num):
SELECT fid, from, max(num) FROM F, T, C WHERE fid = flight AND parent = ssn GROUP BY fid, from
Focus on stateful queries:
– Join cost grows over time: early, few tuples join; late, we may get cross-products
– A group-by may not produce output until the end
Consider long-term cost, and switch plans in mid-pipeline:
– Optimize with a cost model; use pipelining operators
– Measure cardinalities, compare to estimates, and replan when they differ; execute the new plan on new data inputs
A stitch-up phase computes the cross-phase results.
[Figure: Plan 0 processes partitions F0, T0, C0 and Plan 1 processes F1, T1, C1, both feeding a shared group-by operator on (fid, from) with max(num); a stitch-up plan joins the remaining cross-partition combinations.]

CQP Discussion
• Each plan operates on a horizontal partition: a clean algebraic interpretation!
• Easy to extend to more complex queries: aggregation, grouping, subqueries, etc.
• Separates two factors, and conservatively creates state:
– Scheduling is handled by the pipelined operators
– CQP chooses plans using long-term cost estimation
– Cross-phase results are postponed to the final phase
• Assumes settings where computation cost and state are the bottlenecks; contrast with STAIRs, which move state around once it’s created!

Putting it all in Context

How Do We Understand the Relationship between Techniques?
Several different axes are useful:
– When are the techniques applicable? Adaptive selection ordering; history-independent joins; history-dependent joins
– How do they handle the different aspects of adaptivity?
– How do we EXPLAIN adaptive query plans?

Adaptivity Loop (Measure → Analyze → Plan → Actuate)
Measure what? Cardinalities/selectivities, operator costs, resource utilization.
Measure when? Continuously (eddies); using a random sample (A-Greedy); at materialization points (mid-query reoptimization).
Measurement overhead? From simple counter increments (mid-query) to very high.

Adaptivity Loop (Measure → Analyze → Plan → Actuate)
Analyze/replan what decisions? Analyze actual vs. estimated selectivities; evaluate the costs of the alternatives and of switching (keeping state in mind).
Analyze/replan when? Periodically; at materializations (mid-query); when trigger conditions hold (A-Greedy).
Plan how far ahead? Next tuple; batch; next stage (staged); possibly the remainder of the plan (CQP).
Planning overhead? From a switch statement (parametric) to dynamic programming (CQP, mid-query).

Adaptivity Loop (Measure → Analyze → Plan → Actuate)
Actuation: how do they switch to the new plan or new routing strategy, and at what overhead?
– At the end of pipelines: free (mid-query)
– During pipelines: history-independent execution is essentially free (selections, MJoins); history-dependent execution may need to migrate state (STAIRs, CAPE)

Adaptive Query Processing “Plans”: Post-Mortem Analyses
After an adaptive technique has completed, we can explain what it did over time in terms of data partitions and relational algebra. For example, a selection ordering technique may effectively have partitioned the input relation into multiple partitions, where each partition was run with a different order of application of the selection predicates.
Such analyses help in understanding how the technique manipulated the query plan. See our survey in now Publishers’ Foundations and Trends in Databases, Vol. 1 No. 1.

Research Roundup

Measurement & Models
• Combining static and runtime measurement
• Finding the right model granularity / measurement timescale: how often, how heavyweight? Active probing?
• Dealing with correlation in a tractable way
There are clear connections here to online algorithms, to machine learning and control theory (bandit problems, reinforcement learning), and to operations research scheduling.

Understanding the Execution Space
Identify the “complete” space of post-mortem executions: partitioning, caching, state migration, competition & redundant work, sideways information passing, distribution/parallelism!
What aspects of this space are important, and when?
– A buried lesson of AQP work: “non-Selingerian” plans can win big!
– Can we identify robust plans or strategies?
Given this (much!) larger plan space, navigate it efficiently, especially on the fly.

Wrap-up
Adaptivity is the future (and past!) of query processing. Lessons and structure are emerging:
– The adaptivity “loop” and its separable components; the relationship between measurement, modeling/planning, and actuation
– Horizontal partitioning “post-mortems” as a logical framework for understanding and explaining adaptive execution
– Selection ordering as a clean “kernel”, and its limitations
– The critical and tricky role of state in join processing
A lot of science and engineering remains!

References
[A-D03] R. Arpaci-Dusseau: Runtime Adaptation in River. ACM TOCS.
[AH’00] R. Avnur, J. M. Hellerstein: Eddies: Continuously Adaptive Query Processing. SIGMOD Conference 2000.
[Antoshenkov93] G. Antoshenkov: Dynamic Query Optimization in Rdb/VMS. ICDE 1993.
[BBD’05] S. Babu, P. Bizarro, D. J. DeWitt: Proactive Reoptimization. VLDB 2005.
[BBDW’05] P. Bizarro, S. Babu, D. J. DeWitt, J. Widom: Content-Based Routing: Different Plans for Different Data. VLDB 2005.
[BC02] N. Bruno, S. Chaudhuri: Exploiting Statistics on Query Expressions for Optimization. SIGMOD Conference 2002.
[BC05] B. Babcock, S. Chaudhuri: Towards a Robust Query Optimizer: A Principled and Practical Approach. SIGMOD Conference 2005.
[BMMNW’04] S. Babu, et al.: Adaptive Ordering of Pipelined Stream Filters. SIGMOD Conference 2004.
[CDHW06] A. Condon, A. Deshpande, L. Hellerstein, N. Wu: Flow Algorithms for Two Pipelined Filter Ordering Problems. PODS 2006.
[CDY’95] S. Chaudhuri, U. Dayal, T. W. Yan: Join Queries with External Text Sources: Execution and Optimization Techniques. SIGMOD Conference 1995.
[CG94] R. L. Cole, G. Graefe: Optimization of Dynamic Query Evaluation Plans. SIGMOD Conference 1994.
[CF03] S. Chandrasekaran, M. Franklin: Streaming Queries over Streaming Data. VLDB 2003.
[CHG02] F. C. Chu, J. Y. Halpern, J. Gehrke: Least Expected Cost Query Optimization: What Can We Expect? PODS 2002.
[CN97] S. Chaudhuri, V. R. Narasayya: An Efficient Cost-Driven Index Selection Tool for Microsoft SQL Server. VLDB 1997.

References (2)
[CR94] C.-M. Chen, N. Roussopoulos: Adaptive Selectivity Estimation Using Query Feedback. SIGMOD Conference 1994.
[DGHM’05] A. Deshpande, C. Guestrin, W. Hong, S. Madden: Exploiting Correlated Attributes in Acquisitional Query Processing. ICDE 2005.
[DGMH’05] A. Deshpande, et al.: Model-based Approximate Querying in Sensor Networks. VLDB Journal, 2005.
[DH’04] A. Deshpande, J. Hellerstein: Lifting the Burden of History from Adaptive Query Processing. VLDB 2004.
[EHJKMW’96] O. Etzioni, et al.: Efficient Information Gathering on the Internet. FOCS 1996.
[GW’00] R. Goldman, J. Widom: WSQ/DSQ: A Practical Approach for Combined Querying of Databases and the Web. SIGMOD Conference 2000.
[INSS92] Y. E. Ioannidis, R. T. Ng, K. Shim, T. K. Sellis: Parametric Query Optimization. VLDB 1992.
[IHW04] Z. G. Ives, A. Y. Halevy, D. S. Weld: Adapting to Source Properties in Data Integration Queries. SIGMOD 2004.
[K’01] M. S. Kodialam: The Throughput of Sequential Testing. Integer Programming and Combinatorial Optimization (IPCO) 2001.
[KBZ’86] R. Krishnamurthy, H. Boral, C. Zaniolo: Optimization of Nonrecursive Queries. VLDB 1986.
[KD’98] N. Kabra, D. J. DeWitt: Efficient Mid-Query Re-Optimization of Sub-Optimal Query Execution Plans. SIGMOD Conference 1998.
[KKM’05] H. Kaplan, E. Kushilevitz, Y. Mansour: Learning with Attribute Costs. ACM STOC 2005.
[KNT89] M. Kitsuregawa, M. Nakayama, M. Takagi: The Effect of Bucket Size Tuning in the Dynamic Hybrid GRACE Hash Join Method. VLDB 1989.

References (3)
[LEO 01] M. Stillger, G. M. Lohman, V. Markl, M. Kandil: LEO - DB2's LEarning Optimizer. VLDB 2001.
[L+07] Q. Li, et al.: Adaptively Reordering Joins during Query Execution. ICDE 2007.
[MRS+04] V. Markl, et al.: Robust Query Processing through Progressive Optimization. SIGMOD Conference 2004.
[MSHR’02] S. Madden, M. A. Shah, J. M. Hellerstein, V. Raman: Continuously Adaptive Continuous Queries over Streams. SIGMOD Conference 2002.
[NKT88] M. Nakayama, M. Kitsuregawa, M. Takagi: Hash-Partitioned Join Method Using Dynamic Destaging Strategy. VLDB 1988.
[PCL93a] H. Pang, M. J. Carey, M. Livny: Memory-Adaptive External Sorting. VLDB 1993.
[PCL93b] H. Pang, M. J. Carey, M. Livny: Partially Preemptive Hash Joins. SIGMOD Conference 1993.
[RH’05] N. Reddy, J. Haritsa: Analyzing Plan Diagrams of Database Query Optimizers. VLDB 2005.
[SF’01] M. A. Shayman, E. Fernandez-Gaucherand: Risk-Sensitive Decision-Theoretic Diagnosis. IEEE Trans. Automatic Control, 2001.
[SHB04] M. A. Shah, J. M. Hellerstein, E. Brewer: Highly-Available, Fault-Tolerant, Parallel Dataflows. SIGMOD 2004.
[SHCF03] M. A. Shah, J. M. Hellerstein, S. Chandrasekaran, M. J. Franklin: Flux: An Adaptive Partitioning Operator for Continuous Query Systems. ICDE 2003.
[SMWM’06] U. Srivastava, K. Munagala, J. Widom, R. Motwani: Query Optimization over Web Services. VLDB 2006.
[TD03] F. Tian, D. J. DeWitt: Tuple Routing Strategies for Distributed Eddies. VLDB 2003.
[UFA’98] T. Urhan, M. J. Franklin, L. Amsaleg: Cost Based Query Scrambling for Initial Delays. SIGMOD Conference 1998.
[UF 00] T. Urhan, M. J. Franklin: XJoin: A Reactively-Scheduled Pipelined Join Operator. IEEE Data Eng. Bull. 23(2), 2000.
[VNB’03] S. Viglas, J. F. Naughton, J. Burger: Maximizing the Output Rate of Multi-Way Join Queries over Streaming Information Sources. VLDB 2003.
[WA’91] A. N. Wilschut, P. M. G. Apers: Dataflow Query Execution in a Parallel Main-Memory Environment. PDIS 1991: 68-77.