
1 Dapper, a Large-Scale Distributed System Tracing Infrastructure
Google Technical Report, 2010. Authors: B. H. Sigelman, L. A. Barroso, M. Burrows, P. Stephenson, M. Plakal, D. Beaver, S. Jaspan, C. Shanbhag. Presenter: Lei Jinjiang

2 Background Modern Internet services are often implemented as complex, large-scale distributed systems. These applications are constructed from collections of software modules that may be developed by different teams, perhaps in different programming languages, and can span many thousands of machines across multiple physical facilities.

3 Background Imagine a single search request coursing through Google’s massive infrastructure. A single request can run across thousands of machines and involve hundreds of different subsystems. And, by the way, you are processing more requests per second than any other system in the world.

4 Problem How do you debug such a system?
How do you figure out where the problems are? How do you determine if programmers are coding correctly? How do you keep sensitive data secret and safe? How do you ensure products don’t use more resources than they are assigned? How do you store all the data? How do you make use of it? That is where Dapper comes in!

5 Dapper Dapper is Google’s tracing system; it was originally created to understand system behavior from a search request. Today, Google’s production clusters generate more than 1 terabyte of sampled trace data per day.

6 Requirements and Design Goals
Requirements: (1) Ubiquitous deployment (2) Continuous monitoring
Design goals: (1) Low overhead (2) Application-level transparency (3) Scalability

7 Distributed Tracing in Dapper
Two classes of solutions: black-box vs. annotation-based

8 Trace trees and spans The causal and temporal relationships between five spans in a Dapper trace tree
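To make the structure concrete, here is a minimal sketch of the data a span might carry. The class and field names are illustrative only, not Dapper's actual types; what the paper grounds is that spans have a human-readable name, a span ID, a parent ID, timestamped annotations, and a trace ID shared by every span in the tree.

// Java: an illustrative span structure (names are ours, not Dapper's).
import java.util.ArrayList;
import java.util.List;

final class Annotation {
    final long timestampMicros;  // when the annotated event happened
    final String message;        // e.g. "cache hit for foo"
    Annotation(long timestampMicros, String message) {
        this.timestampMicros = timestampMicros;
        this.message = message;
    }
}

final class Span {
    final long traceId;       // shared by every span in one trace tree
    final long spanId;        // probabilistically unique 64-bit integer
    final long parentSpanId;  // position in the tree (0 for the root span)
    final String name;        // human-readable span name
    final List<Annotation> annotations = new ArrayList<>();

    Span(long traceId, long spanId, long parentSpanId, String name) {
        this.traceId = traceId;
        this.spanId = spanId;
        this.parentSpanId = parentSpanId;
        this.name = name;
    }
}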

9 Trees and Spans
A detailed view of a single span from the previous figure. Be mindful of time skew!

10 Instrumentation points
When a thread handles a traced control path, Dapper attaches a trace context to thread-local storage. Most Google developers use a common control flow library to construct callbacks; Dapper ensures that all such callbacks store the trace context of their creator, so the context follows the computation even as it hops between threads.
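As a rough sketch of how this can work, assuming hypothetical TraceContext and Tracing classes (these names are ours; the deck does not show this code): the current context lives in a ThreadLocal, and wrapping a callback captures that context so it is re-attached in whichever thread eventually runs the callback.

// Java: an illustrative sketch of trace-context propagation via
// thread-local storage and callback wrapping (not Dapper's actual code).
final class TraceContext {
    final long traceId;
    final long spanId;
    TraceContext(long traceId, long spanId) {
        this.traceId = traceId;
        this.spanId = spanId;
    }
}

final class Tracing {
    private static final ThreadLocal<TraceContext> CURRENT = new ThreadLocal<>();

    static TraceContext current() { return CURRENT.get(); }
    static void setCurrent(TraceContext ctx) { CURRENT.set(ctx); }

    // Wrap a callback so the creator's trace context follows it across threads.
    static Runnable wrap(Runnable callback) {
        final TraceContext captured = current();  // context of the callback's creator
        return () -> {
            TraceContext previous = current();
            setCurrent(captured);                 // re-attach in the executing thread
            try {
                callback.run();
            } finally {
                setCurrent(previous);             // restore the thread's old context
            }
        };
    }
}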

11 Callback In computer programming, a callback is a reference to executable code, or a piece of executable code, that is passed as an argument to other code. This allows a lower-level software layer to call a subroutine (or function) defined in a higher-level layer.
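A minimal, self-contained example of the idea (ours, not from the paper): the lower-level fetch routine knows nothing about its caller; it simply invokes whatever code was handed to it.

// Java: a minimal callback example.
import java.util.function.Consumer;

final class CallbackDemo {
    // Lower-level layer: does its work, then calls back into the caller's code.
    static void fetch(String key, Consumer<String> onDone) {
        String result = "value-for-" + key;  // stand-in for real work
        onDone.accept(result);               // call back into the higher layer
    }

    public static void main(String[] args) {
        // Higher-level layer supplies the callback as an argument.
        fetch("cache-key", result -> System.out.println("got " + result));
    }
}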

12 Annotations
// C++:
const string& request = ...;
if (HitCache())
  TRACEPRINTF("cache hit for %s", request.c_str());
else
  TRACEPRINTF("cache miss for %s", request.c_str());

// Java:
Tracer t = Tracer.getCurrentTracer();
String request = ...;
if (hitCache())
  t.record("cache hit for " + request);
else
  t.record("cache miss for " + request);

13 Sampling Low overhead was a key design goal for Dapper, since service operators would be understandably reluctant to deploy a new tool of yet unproven value if it had any significant impact on performance… Therefore, … , we further control overhead by recording only a fraction of all traces.

14 Trace collection

15 Out-of-band trace collection
Firstly, in-band trace data would dwarf the application data and bias the results of subsequent analyses. Secondly, many middleware systems return a result to their caller before all of their own backends have returned a final result, so an in-band scheme could not capture such traces completely. There are also security and privacy considerations.

16 Production coverage Given how ubiquitous Dapper-instrumented libraries are, we estimate that nearly every Google production process supports tracing. There are cases where Dapper is unable to follow the control path correctly. These typically stem from the use of non-standard control-flow primitives. Dapper tracing can be turned off as a production safety measure.

17 Use of trace annotations
Currently, 70% of all Dapper spans and 90% of all Dapper traces have at least one application-specified annotation. 41 Java and 68 C++ applications have added custom application annotations in order to better understand intra-span activity in their services.

18 Trace collection overhead
CPU resource usage for the Dapper daemon during load testing:

Process count (per host)   Data rate (per process)   Daemon CPU usage (single CPU core)
25                         10K/sec                   0.125%
10                         200K/sec                  0.267%
50                         2K/sec                    0.130%

The daemon never uses more than 0.3% of one core of a production machine during collection, and has a very small memory footprint. The Dapper daemon is also restricted to the lowest possible priority in the kernel scheduler. Each span is about 426 bytes, which amounts to less than 0.01% of the network traffic in Google’s production environment.

19 Trace collection overhead
The effect of different [non-adaptive] Dapper sampling frequencies on the latency and throughput of a Web search cluster:

Sampling frequency   Avg. latency (% change)   Avg. throughput (% change)
1/1                  16.3%                     -1.48%
1/2                  9.40%                     -0.73%
1/4                  6.38%                     -0.30%
1/8                  4.12%                     -0.23%
1/16                 2.12%                     -0.08%
1/1024               -0.20%                    -0.06%

The experimental errors for these latency and throughput measurements are 2.5% and 0.15% respectively.

20 Adaptive sampling Lower-traffic workloads may miss important events at such a low sampling rate (1/1024). Workloads with low traffic automatically increase their sampling rate, while those with very high traffic lower it, so that overheads remain under control. Reliability …
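The scheme is parameterized by a desired rate of sampled traces per unit time rather than by a uniform probability. A minimal sketch of that idea, with a class and fields of our own invention (the paper does not publish its sampling code):

// Java: an illustrative adaptive sampler (our sketch, not Dapper's code).
import java.util.concurrent.ThreadLocalRandom;

final class AdaptiveSampler {
    private final double targetTracesPerSec;        // desired sampled-trace rate
    private double observedRequestsPerSec = 1.0;    // updated from recent traffic

    AdaptiveSampler(double targetTracesPerSec) {
        this.targetTracesPerSec = targetTracesPerSec;
    }

    // Called periodically with the measured request rate for this workload.
    void updateTraffic(double requestsPerSec) {
        observedRequestsPerSec = Math.max(requestsPerSec, 1.0);
    }

    boolean shouldSample() {
        // Probability chosen so that, at the observed traffic level, roughly
        // targetTracesPerSec traces are sampled; low traffic raises the
        // probability, high traffic lowers it, and it never exceeds 1.
        double p = Math.min(1.0, targetTracesPerSec / observedRequestsPerSec);
        return ThreadLocalRandom.current().nextDouble() < p;
    }
}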

21 Additional sampling during collection
The Dapper team also needs to control the total size of data written to its central repositories, and thus incorporates a second round of sampling for that purpose. For each span seen in the collection system, we hash the associated trace ID as a scalar z, where 0 ≤ z ≤ 1. If z is less than our collection sampling coefficient, we keep the span and write it to Bigtable; otherwise, we discard it.
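A sketch of that decision in code; the mixing constant and method names are our choice, not Dapper's. Because the hash is keyed on the trace ID, every span of a given trace maps to the same z, so an entire trace is kept or discarded as a unit.

// Java: an illustrative collection-time sampling decision.
final class CollectionSampler {
    // Map a 64-bit trace ID deterministically to a scalar z in [0, 1).
    static double toUnitInterval(long traceId) {
        long h = traceId * 0x9E3779B97F4A7C15L;   // cheap 64-bit mixing step
        return (h >>> 11) / (double) (1L << 53);  // top 53 bits -> [0, 1)
    }

    // Keep the span iff z falls below the collection sampling coefficient.
    static boolean keepSpan(long traceId, double samplingCoefficient) {
        return toUnitInterval(traceId) < samplingCoefficient;
    }
}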

22 The Dapper Depot API
Access by trace ID
Bulk access: access to billions of Dapper traces in parallel
Indexed access: the index maps from commonly requested trace features (host machine, service name) to distinct Dapper traces
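One plausible shape for these three access patterns, written as a hypothetical Java interface; the names and signatures below are our invention, since the deck does not show the actual API.

// Java: a hypothetical sketch of the Depot API's three access patterns.
import java.util.Iterator;
import java.util.List;

final class Trace { /* the spans of one trace tree, omitted here */ }

interface DapperDepot {
    Trace getTrace(long traceId);                     // access by trace ID
    Iterator<Trace> scanTraces();                     // bulk access, e.g. feeding a parallel job
    List<Long> lookupTraceIds(String hostOrService);  // indexed access: feature -> trace IDs
}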

23 User interface

24 Experiences
Using Dapper during development (integration with exception monitoring)
Addressing long-tail latency
Inferring service dependencies
Network usage of different services
Layered and shared storage systems (e.g. GFS)

25 Other Lessons Learned
Coalescing effects
Tracing batch workloads
Finding a root cause
Logging kernel-level information

26 Thank you!

