Published by Wilfred Curtis. Modified over 7 years ago.
Part III: Big Data Analysis Tools (Dremel)
Yuan Xue (yuan.xue@vanderbilt.edu)
Part of the slides were made by the authors of the Dremel paper.
Overview
► BigQuery is Google's database service based on Dremel.
► BigQuery is hosted by Google.
► Impala and Drill are open-source databases inspired by the Dremel paper.
► Impala is part of the Cloudera Hadoop distribution.
► Drill is part of the MapR Hadoop distribution.
Dremel → BigQuery (Google hosted), Impala (Cloudera release), Drill (MapR release)
Motivation
► Google faced MapReduce's main problem: latency.
► The problem propagated to the engines built on top of MapReduce as well.
► Google first approached it by developing real-time query capability for big data.
Speed matters.
Philosophy
Dremel's philosophy:
► Let's implement a SQL subset that does have a fast and scalable implementation.
► It is somewhat similar to other NoSQL systems: we do what we can do VERY FAST at scale; the rest is the application's problem.
The main idea is to harness a huge cluster of machines for a single query.
Why call it Dremel?
Dremel is a brand of power tools that primarily rely on their speed as opposed to torque. Likewise, the system is a data analysis tool that uses speed instead of raw power.
Widely used inside Google
Analysis of crawled web documents
Tracking install data for applications on Android Market
Crash reporting for Google products
OCR results from Google Books
Spam analysis
Debugging of map tiles on Google Maps
Tablet migrations in managed Bigtable instances
Results of tests run on Google's distributed build system
Disk I/O statistics for hundreds of thousands of disks
Resource monitoring for jobs run in Google's data centers
Symbols and dependencies in Google's codebase
Dremel vs. MapReduce
► Dremel is not a replacement for MapReduce or Tenzing but complements them. (Tenzing is Google's equivalent of Hive.)
► An analyst can make many fast queries using Dremel.
► After getting a good idea of what is needed, run a slower MapReduce job (or MapReduce-based SQL) to get precise results.
Example usage: data exploration
Googler Alice:
1. Runs a MapReduce to extract billions of signals from web pages.
2. Issues ad hoc SQL against Dremel:
DEFINE TABLE t AS /path/to/data/*
SELECT TOP(signal, 100), COUNT(*) FROM t...
3. Runs more MR-based processing on her data (FlumeJava [PLDI'10], Sawzall [Sci.Pr.'05]).
Dremel system
Trillion-record, multi-terabyte datasets at interactive speed
Scales to thousands of nodes
Fault and straggler tolerant execution
Nested data model
  Complex datasets; normalization is prohibitive
Columnar storage and processing
Tree architecture (as in web search)
Interoperates with Google's data mgmt tools
  In situ data access (e.g., GFS, Bigtable)
  MapReduce pipelines
Impala
(Architecture stack diagram: Impala sits alongside MapReduce and Hive/Pig as a processing engine on top of HDFS.)
Details of Dremel Design
Nested columnar storage
Query processing
Experiments
Observations
Records vs. columns
Record-oriented storage lays out all fields (A, B, C, D, E) of r1, then all fields of r2; column-oriented storage stores each field's values contiguously across records.
Challenge: preserve the nesting structure and reconstruct records from a subset of fields.
Payoff: read less data, cheaper decompression.
Example record r1:
DocId: 10
Links
  Forward: 20
Name
  Language
    Code: 'en-us'
    Country: 'us'
  Url: 'http://A'
Name
  Url: 'http://B'
Nested data model
Schema (multiplicity shown in brackets):
message Document {
  required int64 DocId;          [1,1]
  optional group Links {
    repeated int64 Backward;     [0,*]
    repeated int64 Forward;
  }
  repeated group Name {
    repeated group Language {
      required string Code;
      optional string Country;   [0,1]
    }
    optional string Url;
  }
}
Record r1:
DocId: 10
Links
  Forward: 20
  Forward: 40
  Forward: 60
Name
  Language
    Code: 'en-us'
    Country: 'us'
  Language
    Code: 'en'
  Url: 'http://A'
Name
  Url: 'http://B'
Name
  Language
    Code: 'en-gb'
    Country: 'gb'
Record r2:
DocId: 20
Links
  Backward: 10
  Backward: 30
  Forward: 80
Name
  Url: 'http://C'
http://code.google.com/apis/protocolbuffers
Column-striped representation, as (value, r, d) triples:
DocId: (10, 0, 0), (20, 0, 0)
Links.Backward: (NULL, 0, 1), (10, 0, 2), (30, 1, 2)
Links.Forward: (20, 0, 2), (40, 1, 2), (60, 1, 2), (80, 0, 2)
Name.Url: ('http://A', 0, 2), ('http://B', 1, 2), (NULL, 1, 1), ('http://C', 0, 2)
Name.Language.Code: ('en-us', 0, 2), ('en', 2, 2), (NULL, 1, 1), ('en-gb', 1, 2), (NULL, 0, 1)
Name.Language.Country: ('us', 0, 3), (NULL, 2, 2), (NULL, 1, 1), ('gb', 1, 3), (NULL, 0, 1)
Repetition and definition levels
r: at what repeated field in the field's path the value has repeated (r=0: a new record starts; r=1: repeated at Name; r=2: repeated at Language within the same Name).
d: how many fields in the path that could be undefined (optional or repeated) are actually present.
Example column Name.Language.Code, built from records r1 and r2:
('en-us', 0, 2), ('en', 2, 2), (NULL, 1, 1), ('en-gb', 1, 2), (NULL, 0, 1)
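As a concrete illustration of how r and d are computed, here is a minimal sketch, assuming records are plain Python dicts and hardcoding the logic to the single path Name.Language.Code (a real shredder generalizes this by walking the schema):

```python
def shred_name_language_code(record):
    """Shred one record into (value, r, d) triples for Name.Language.Code.

    r: level of the most recently repeated field on the path (0 = new
       record, 1 = new Name, 2 = new Language within the same Name).
    d: how many optional/repeated fields on the path are present
       (Name counts as 1, Language as 2; Code is required, so it adds nothing).
    """
    out = []
    names = record.get('Name', [])
    if not names:
        out.append((None, 0, 0))           # path absent entirely
        return out
    for i, name in enumerate(names):
        r_name = 0 if i == 0 else 1        # first Name starts the record
        langs = name.get('Language', [])
        if not langs:
            out.append((None, r_name, 1))  # Name present, Language missing
            continue
        for j, lang in enumerate(langs):
            r = r_name if j == 0 else 2    # later Languages repeat at level 2
            out.append((lang['Code'], r, 2))
    return out

r1 = {'Name': [
    {'Url': 'http://A', 'Language': [{'Code': 'en-us'}, {'Code': 'en'}]},
    {'Url': 'http://B'},
    {'Language': [{'Code': 'en-gb'}]},
]}
r2 = {'Name': [{'Url': 'http://C'}]}

column = shred_name_language_code(r1) + shred_name_language_code(r2)
```

Running this on r1 and r2 reproduces exactly the five triples shown on the slide.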
Record assembly FSM
For record-oriented data processing (e.g., MapReduce), records are reassembled from columns by a finite state machine whose states are the field readers (DocId, Links.Backward, Links.Forward, Name.Language.Code, Name.Language.Country, Name.Url) and whose transitions are labeled with repetition levels.
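For a single column, the assembly idea can be sketched without the full FSM machinery. The following is a hedged sketch that rebuilds the [record][Name][Code] nesting from the Name.Language.Code triples of the previous slide (lists of lists stand in for the record tree; the actual FSM coordinates many columns at once):

```python
def assemble_codes(column):
    """Rebuild [record][name][code] nesting from (value, r, d) triples.

    r says how far up the tree to rewind before appending:
    r == 0 starts a new record, r <= 1 starts a new Name,
    and d == 2 means the Code value itself is defined.
    """
    records = []
    for value, r, d in column:
        if r == 0:
            records.append([])             # new record
        if r <= 1:
            records[-1].append([])         # new Name in the current record
        if d == 2:
            records[-1][-1].append(value)  # Code defined at full depth
    return records

column = [('en-us', 0, 2), ('en', 2, 2), (None, 1, 1),
          ('en-gb', 1, 2), (None, 0, 1)]
nested = assemble_codes(column)
```

The output recovers r1's three Names (two codes, none, one code) and r2's single Name with no code.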
Reading two fields
An FSM over just DocId and Name.Language.Country assembles partial records:
s1: DocId: 10; Name { Language { Country: 'us' }, Language }; Name; Name { Language { Country: 'gb' } }
s2: DocId: 20; Name
The structure of parent fields is preserved, which is useful for queries like /Name[3]/Language[1]/Country.
Outline
Nested columnar storage
Query processing
Experiments
Observations
Query processing
Optimized for select-project-aggregate queries:
a very common class of interactive queries
a single scan over the data
within-record and cross-record aggregation
SQL dialect for nested data
Query:
SELECT DocId AS Id,
  COUNT(Name.Language.Code) WITHIN Name AS Cnt,
  Name.Url + ',' + Name.Language.Code AS Str
FROM t
WHERE REGEXP(Name.Url, '^http') AND DocId < 20;
Output table t1:
Id: 10
Name
  Cnt: 2
  Language
    Str: 'http://A,en-us'
  Language
    Str: 'http://A,en'
Name
  Cnt: 0
Output schema:
message QueryResult {
  required int64 Id;
  repeated group Name {
    optional uint64 Cnt;
    repeated group Language {
      optional string Str;
    }
  }
}
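A rough row-oriented emulation of the query's semantics may help: WITHIN Name scopes the aggregate to each Name group, and the predicate on Name.Url appears to filter individual Name groups rather than whole records (which is why the output has only two Name groups). This is a sketch of the semantics on an in-memory dict, not of Dremel's columnar execution:

```python
import re

# Record r1 as a plain dict (same data as the slide)
r1 = {
    'DocId': 10,
    'Name': [
        {'Url': 'http://A',
         'Language': [{'Code': 'en-us', 'Country': 'us'}, {'Code': 'en'}]},
        {'Url': 'http://B'},
        {'Language': [{'Code': 'en-gb', 'Country': 'gb'}]},
    ],
}

def run_query(record):
    """Emulate the slide's query on one record, row at a time."""
    if not record['DocId'] < 20:          # WHERE DocId < 20 (record level)
        return None
    out = {'Id': record['DocId'], 'Name': []}
    for name in record.get('Name', []):
        url = name.get('Url')
        if url is None or not re.match('^http', url):
            continue                      # WHERE REGEXP(Name.Url, '^http')
        codes = [l['Code'] for l in name.get('Language', []) if 'Code' in l]
        out['Name'].append({
            'Cnt': len(codes),            # COUNT(...) WITHIN Name
            'Str': [url + ',' + c for c in codes],
        })
    return out

result = run_query(r1)
```

The result matches the output table: a Name group with Cnt 2 and the two 'http://A,...' strings, and a Name group with Cnt 0 (the third Name has no Url and is filtered out).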
Serving tree
client
  root server
    intermediate servers
      leaf servers (with local storage)
        storage layer (e.g., GFS)
Parallelizes scheduling and aggregation
Fault tolerance
Stragglers
Designed for "small" results (<1M records) [Dean WSDM'09]
(figure: histogram of response times)
Example: count()
Level 0 (root) receives:
SELECT A, COUNT(B) FROM T GROUP BY A, where T = {/gfs/1, /gfs/2, ..., /gfs/100000}
and rewrites it to:
SELECT A, SUM(c) FROM (R_1^1 UNION ALL ... UNION ALL R_1^10) GROUP BY A
Level 1 (intermediate servers) compute the partial results R_1^i over tablet subsets:
R_1^1: SELECT A, COUNT(B) AS c FROM T_1^1 GROUP BY A, T_1^1 = {/gfs/1, ..., /gfs/10000}
R_1^2: SELECT A, COUNT(B) AS c FROM T_1^2 GROUP BY A, T_1^2 = {/gfs/10001, ..., /gfs/20000}
...
Level 3 (leaf servers) run the data access ops, e.g.:
SELECT A, COUNT(B) AS c FROM T_3^1 GROUP BY A, T_3^1 = {/gfs/1}
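The rewriting above can be mimicked in a few lines: leaf servers compute partial GROUP BY counts over their tablets, and upper levels merge the partials by summing. A hedged sketch with made-up toy tablets (dicts stand in for rows, Counter does the partial aggregation):

```python
from collections import Counter

def leaf_count(tablet):
    """Leaf server: SELECT A, COUNT(B) AS c FROM tablet GROUP BY A."""
    partial = Counter()
    for row in tablet:
        if row.get('B') is not None:   # COUNT(B) skips NULL values of B
            partial[row['A']] += 1
    return partial

def merge(partials):
    """Intermediate/root: SELECT A, SUM(c) FROM (... UNION ALL ...) GROUP BY A."""
    total = Counter()
    for p in partials:
        total.update(p)                # sums the per-tablet counts per key
    return total

# Two toy "tablets" standing in for groups of /gfs/... files (made-up data)
t1 = [{'A': 'x', 'B': 1}, {'A': 'y', 'B': 2}, {'A': 'x', 'B': None}]
t2 = [{'A': 'x', 'B': 3}, {'A': 'y', 'B': 4}]
result = merge([leaf_count(t1), leaf_count(t2)])
```

Because COUNT partials merge by addition, any number of intermediate levels can apply the same merge step, which is what lets the serving tree be arbitrarily deep.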
Outline
Nested columnar storage
Query processing
Experiments
Observations
Experiments
Table | Number of records | Size (unrepl., compressed) | Number of fields | Data center | Repl. factor
T1 | 85 billion | 87 TB | 270 | A | 3×
T2 | 24 billion | 13 TB | 530 | A | 3×
T3 | 4 billion | 70 TB | 1200 | A | 3×
T4 | 1+ trillion | 105 TB | 50 | B | 3×
T5 | 1+ trillion | 20 TB | 30 | B | 2×
1 PB of real data (uncompressed, non-replicated)
100K-800K tablets per table
Experiments run during business hours
Read from disk
Setup: "cold" time on local disk, averaged over 30 runs; table partition of 375 MB (compressed), 300K rows, 125 columns.
Measured steps, from columns: (a) read + decompress, (b) assemble records, (c) parse as C++ objects; from records: (d) read + decompress, (e) parse as C++ objects.
(figure: time in seconds vs. number of fields read)
Result: records carry a 2-4× overhead; columnar storage gives about a 10× speedup when only a few fields are read.
MR and Dremel execution
Q1 computes the average number of terms in txtField of the 85-billion-record table T1:
SELECT SUM(count_words(txtField)) / COUNT(*) FROM T1
Equivalent Sawzall program run on MR:
num_recs: table sum of int;
num_words: table sum of int;
emit num_recs <- 1;
emit num_words <- count_words(input.txtField);
Execution times measured on 3000 nodes; record-oriented MR reads 87 TB, while the columnar runs read only 0.5 TB.
MR overheads: launching jobs, scheduling 0.5M tasks, assembling records.
Impact of serving tree depth
Q2: SELECT country, SUM(item.amount) FROM T2 GROUP BY country (returns 100s of records)
Q3: SELECT domain, SUM(item.amount) FROM T2 WHERE domain CONTAINS '.net' GROUP BY domain (returns 1M records)
Both queries scan 40 billion nested items; execution time measured in seconds.
Scalability
Q5 on the trillion-row table T4:
SELECT TOP(aid, 20), COUNT(*) FROM T4
(figure: execution time in seconds vs. number of leaf servers)
Outline
Nested columnar storage
Query processing
Experiments
Observations
Interactive speed
Most queries complete in under 10 seconds.
Based on the monthly query workload of one 3000-node Dremel instance (figure: execution time in seconds vs. percentage of queries).
Observations
It is possible to analyze large disk-resident datasets interactively on commodity hardware (1T records, 1000s of nodes).
MR can benefit from columnar storage just like a parallel DBMS, but record assembly is expensive.
Interactive SQL and MR can be complementary.
Parallel DBMSes may benefit from a serving tree architecture just like search engines do.
BigQuery: powered by Dremel
http://code.google.com/apis/bigquery/
1. Upload: upload your data to Google Storage.
2. Process: import to BigQuery tables and run queries.
3. Act: your apps use the query results.
Impala architecture