Tail Latency: Networking

1 Tail Latency: Networking

2 The story thus far
Tail latency is bad. Causes:
- Resource contention with background jobs
- Device failure
- Uneven split of data between tasks
- Network congestion for reducers

3 Ways to address tail latency
Clone all tasks Clone slow tasks Copy intermediate data Remove/replace frequently failing machines Spread out reducers

4 What is missing from this picture?
Networking: spreading out reducers is not sufficient; the network itself is crucial. Studies on Facebook traces show that [Orchestra]:
- in 26% of jobs, shuffle is 50% of the runtime
- in 16% of jobs, shuffle is more than 70% of the runtime
- 42% of tasks spend over 50% of their time writing to HDFS

5 Other implication: the network limits scalability
- The scalability of a Netflix-like recommendation system is bottlenecked by communication
- It did not scale beyond 60 nodes: communication time increased faster than computation time decreased
[Orchestra slides]

6 What is the Impact of the Network?
- Assume a 10 ms deadline for tasks [DCTCP]
- Simulate job completion times from the distributions of task completion times, focusing on the 99.9th percentile (a toy version of this simulation follows)
- Roughly 14% and 3% of tasks miss the deadline in the two settings shown
[Figures from the DCTCP, D3, and DeTail slide decks]
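A minimal sketch of the kind of simulation the slide describes: draw per-task completion times from a latency distribution and treat a job as finished only when its slowest task finishes. The lognormal parameters, deadline handling, and trial counts below are invented for illustration, not the measured DCTCP/DeTail traces.

```python
# Sketch (not the papers' traces): how per-task tail latency inflates
# job-level deadline misses when a job fans out to many parallel tasks.
import random

DEADLINE_MS = 10.0

def task_latency_ms():
    # Hypothetical heavy-tailed task/flow completion time (made-up parameters).
    return random.lognormvariate(1.0, 0.7)   # median ~2.7 ms, long tail

def job_miss_rate(fanout, trials=20_000):
    # A job finishes when its slowest task finishes (the "barrier" effect).
    misses = sum(
        max(task_latency_ms() for _ in range(fanout)) > DEADLINE_MS
        for _ in range(trials)
    )
    return misses / trials

for fanout in (1, 10, 40):
    print(f"fanout={fanout:3d}  P(job misses 10 ms deadline) = {job_miss_rate(fanout):.3f}")
```

Even a small per-task miss probability multiplies quickly with fan-out, which is why the slides focus on the 99.9th percentile of the network rather than the median.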

10 What Causes this Variation in Network Transfer Times?
First, let's look at the types of traffic in the network:
- Background traffic
  - Latency-sensitive short control messages, e.g. heartbeats, job status
  - Large files, e.g. HDFS replication, loading of new data
- Map-reduce jobs
  - Small RPC requests/responses with tight deadlines
  - HDFS reads or writes with tight deadlines

11 What Causes this Variation in Network Transfer Times?
No notion of priority Latency sensitive and non-latency sensitive share the network equally. Uneven load-balancing ECMP doesn’t schedule flows evenly across all paths Assume long and short are the same Bursts of traffic Networks have buffers which reduce loss but introduce latency (time waiting in buffer is variable) Kernel optimization introduce burstiness
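A small sketch of the ECMP point, assuming a simple hash-on-5-tuple model; the path count, flow tuples, and sizes are invented. The key property is that path choice depends only on the flow's header fields, never on how much data the flow carries or how loaded each path already is.

```python
# Sketch: why static ECMP load-balancing can be uneven. Flows are pinned to
# a path by a hash of their 5-tuple; path choice ignores flow size, so two
# elephants can collide on one path while other paths stay idle.
import hashlib

PATHS = 4

def ecmp_path(five_tuple):
    # Hash-based path selection: all packets of a flow take the same path.
    digest = hashlib.md5(repr(five_tuple).encode()).digest()
    return digest[0] % PATHS

flows = [
    # (src, dst, sport, dport, proto, size_MB) -- made-up values
    ("10.0.0.1", "10.0.1.1", 40001, 50010, "tcp", 900),   # elephant
    ("10.0.0.2", "10.0.1.1", 40002, 50010, "tcp", 800),   # elephant
    ("10.0.0.3", "10.0.1.2", 40003, 50010, "tcp", 1),     # mice
    ("10.0.0.4", "10.0.1.3", 40004, 50010, "tcp", 1),
]

load = [0] * PATHS
for *tup, size in flows:
    load[ecmp_path(tuple(tup))] += size

print("MB per path:", load)   # elephants may share a path regardless of load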

12 Ways to Eliminate Variation and Improve tail latency
Make the network faster HULL, DeTail, DCTCP Faster networks == smaller tail Optimize how application use the network Orchestra, CoFlows Specific big-data transfer patterns, optimize the patterns to reduce transfer time Make the network aware of deadlines D3, PDQ Tasks have deadlines. No point doing any work if deadline wouldn’t be met Try and prioritize flows and schedule them based on deadline.

13 Fair-Sharing or Deadline-based sharing
Fair-share (Status-Quo) Every one plays nice but some deadlines lines can be missed Deadline-based Deadlines met but may require non-trial implemantionat Two ways to do deadline-based sharing Earliest deadline first (PDQ) Make BW reservations for each flow Flow rate = flow size/flow deadline Flow size & deadline are known apriori
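A minimal sketch of the reservation approach on one bottleneck link, compared with plain fair sharing. Link capacity, flow sizes, and deadlines are made-up numbers; the fair-sharing calculation also ignores flows finishing early, which only makes fair sharing look slightly worse than it is.

```python
# Sketch of the two sharing policies on one bottleneck link:
# D3-style reservations (rate = size / deadline) vs. equal fair sharing.
LINK_GBPS = 10.0

flows = [
    # (name, size_megabits, deadline_ms) -- illustrative values
    ("A", 20.0, 5.0),
    ("B", 20.0, 10.0),
    ("C", 40.0, 20.0),
]

# Deadline-based: reserve rate = size / deadline for each flow (Mb/ms == Gbps).
reserved = {name: size / dl for name, size, dl in flows}
print("requested rates (Gbps):", reserved,
      "feasible:", sum(reserved.values()) <= LINK_GBPS)

# Fair sharing: every active flow gets an equal share of the link.
share = LINK_GBPS / len(flows)
for name, size, dl in flows:
    finish_ms = size / share          # simplification: ignores early finishers
    print(f"fair share: flow {name} finishes at {finish_ms:.1f} ms "
          f"(deadline {dl} ms) -> {'OK' if finish_ms <= dl else 'MISSED'}")
```

In this toy setting the reservations are admissible (8 Gbps of 10 Gbps), so every deadline is met, while fair sharing lets the tightest-deadline flow miss even though the link had enough capacity overall.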

15 Issues with Deadline Based Scheduling
Implications for non-deadline based jobs Starvation? Poor completion times? Implementation Issues Assign deadlines to flows not packets Reservation approach Requires reservation for each flow Big data flows: can be small & have small RTT Control loop must be extremelly fast Earliest deadline first Requires coordination between switches & servers Servers: specify flow deadline Switches: priority flows and determine rate May require complex switch mechanisms
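A heavily simplified sketch of the earliest-deadline-first idea behind PDQ: on a single bottleneck, give the whole link to the flow with the earliest deadline and preempt everything else. Flow names, sizes, and deadlines are invented, and all flows are assumed to be present at time zero.

```python
# Sketch of earliest-deadline-first flow scheduling on one link
# (one bottleneck, one flow at line rate at a time; no arrivals over time).
def edf_schedule(flows, link_gbps):
    """flows: list of (name, size_megabits, deadline_ms). Returns finish times."""
    t = 0.0
    finished = {}
    for name, size, deadline in sorted(flows, key=lambda f: f[2]):  # earliest deadline first
        t += size / link_gbps                     # ms to drain this flow at full rate
        finished[name] = (t, t <= deadline)
    return finished

flows = [("A", 20.0, 5.0), ("B", 20.0, 10.0), ("C", 40.0, 20.0)]
for name, (finish, met) in edf_schedule(flows, link_gbps=10.0).items():
    print(f"{name}: finishes at {finish:.1f} ms, deadline {'met' if met else 'missed'}")
```

Running flows to completion one at a time is exactly what makes EDF attractive for short deadline-bound flows, and exactly what requires switch support and server-switch coordination in practice.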

16 How do you make the Network Faster
Throw more hardware at the problem Fat-Tree, VL2, B-Cube, Dragonfly Increases bandwidth (throughput) but not necessarily latency

17 So, how do you reduce latency
Trade bandwidth for latency Buffering adds variation (unpredictability) Eliminate network buffering & bursts Optimize the network stack Use link level information to detect congestion Inform application to adapt by using a different path
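A tiny sketch of the last bullet, in the spirit of DeTail's congestion-aware path selection but heavily simplified: among the equal-cost choices toward a destination, forward via the output with the shortest egress queue instead of a fixed hash-chosen one. The port names and queue occupancies are invented.

```python
# Sketch: react to link-level congestion by steering traffic to the
# least-loaded of the equal-cost next hops.
def pick_port(equal_cost_ports, queue_bytes):
    # queue_bytes: current egress queue occupancy per port (known to the switch).
    return min(equal_cost_ports, key=lambda p: queue_bytes[p])

queue_bytes = {"port1": 48_000, "port2": 1_500, "port3": 9_000}
print("forward via", pick_port(["port1", "port2", "port3"], queue_bytes))
```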

18 HULL: Trading BW for Latency
Buffering introduces latency Buffer is used to accommodate bursts To allow congestion control to get good throughput Removing buffers means Lower throughput for large flows Network can’t handle bursts Predictable low latency

19 Why do Bursts Exists? Systems review:
NIC (network Card) informs OS of packets via interrupt Interrupt consume CPU If one interrupt for each packet the CPU will be overwhelmed Optimization: batch packets up before calling interrupt Size of the batch is the size of the burst
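A back-of-the-envelope sketch of why batching matters for latency; the batch size and link rate are illustrative numbers, not measurements.

```python
# Sketch: how interrupt/transfer batching turns into on-the-wire bursts.
MTU_BYTES = 1500
BATCH_PKTS = 64                 # e.g., packets coalesced per interrupt (assumed)
LINK_GBPS = 10

burst_bytes = BATCH_PKTS * MTU_BYTES
burst_us = burst_bytes * 8 / (LINK_GBPS * 1e3)     # time to emit at line rate
print(f"burst of {burst_bytes} bytes, emitted back-to-back in {burst_us:.1f} us")
# If several hosts emit such bursts into the same switch port at once,
# the excess must sit in the port's buffer -- that queueing is the latency.
```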

21 Why Does Congestion Control Need Buffers?
- Congestion control (TCP) discovers the bottleneck link capacity through packet loss; on loss, it halves its sending rate
- Buffers keep the network busy, which matters precisely when TCP halves its rate: the queue drains into the link while TCP ramps back up
- Essentially, the network needs enough slack to absorb TCP's rate halving and doubling, and buffers provide that slack

22 TCP Review: the Bandwidth-Delay Product
- Rule of thumb: a single flow needs B = C × RTT of buffering for 100% throughput (worked example below)
- With buffer size B < C × RTT, throughput falls below 100%; with B ≥ C × RTT, the link stays fully utilized
[Figure from the DCTCP paper: throughput vs. buffer size]
- How much buffering TCP needs for high throughput is known in the literature as the buffer-sizing problem; so we need a way to reduce the buffering requirement
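A worked instance of the rule of thumb, using illustrative data-center numbers (the link rate and RTT below are assumptions, not values from the slides).

```python
# Worked example of B = C x RTT: the buffering one TCP flow needs to keep
# the link busy across a window halving.
C_GBPS = 10            # bottleneck link capacity (assumed)
RTT_US = 100           # round-trip time inside the data center (assumed)

bdp_bytes = C_GBPS * 1e9 * RTT_US * 1e-6 / 8
print(f"C x RTT = {bdp_bytes/1e3:.0f} KB of buffering for full throughput")
# ~125 KB here; at 1500-byte packets that is ~83 packets of queue,
# and every queued packet is added latency for the short flows behind it.
```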

23 Key Idea Behind Hull Eliminate Bursts Eliminate Buffering
Add a token bucket (Pacer) into the network Pacer must be in the network so it happens after the system optimizations that cause bursts. Eliminate Buffering Send congestion notification messages before link it fully utilized Make applications believe the link is full when there’s still capacity TCP has poor congestion control algorithm Replace with DCTCP
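A simplified sketch of the two mechanisms, not HULL's actual implementation: a token-bucket pacer that smooths host bursts, and a phantom queue that counts bytes against a drain rate slower than the real link and marks ECN when that virtual queue grows. All rates, bucket sizes, and thresholds are invented.

```python
# Sketch of HULL-style pacing and early (phantom-queue) congestion marking.
class Pacer:
    """Token bucket: a packet may leave only if enough tokens have accrued."""
    def __init__(self, rate_bytes_per_us, bucket_bytes):
        self.rate = rate_bytes_per_us
        self.bucket = self.tokens = bucket_bytes
        self.last = 0.0
    def can_send(self, now_us, pkt_bytes):
        self.tokens = min(self.bucket, self.tokens + (now_us - self.last) * self.rate)
        self.last = now_us
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        return False          # hold the packet: smooth instead of bursting

class PhantomQueue:
    """Counts bytes against a drain rate below the real link speed (e.g., ~90%)."""
    def __init__(self, drain_bytes_per_us, mark_thresh_bytes):
        self.drain = drain_bytes_per_us
        self.thresh = mark_thresh_bytes
        self.occupancy = 0.0
        self.last = 0.0
    def on_packet(self, now_us, pkt_bytes):
        self.occupancy = max(0.0, self.occupancy - (now_us - self.last) * self.drain)
        self.last = now_us
        self.occupancy += pkt_bytes
        return self.occupancy > self.thresh    # True => set an ECN mark

pq = PhantomQueue(drain_bytes_per_us=1125, mark_thresh_bytes=3000)  # ~90% of 10 Gb/s
print([pq.on_packet(0.0, 1500) for _ in range(4)])   # a back-to-back burst starts getting marked
```

Because marks arrive while the real queue is still nearly empty, a DCTCP-style sender backs off before any significant buffering builds up, which is the bandwidth-for-latency trade the slide describes.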

25 Orchestra: Managing Data Transfers in Computer Clusters
- Group all flows belonging to a stage into a transfer
- Perform inter-transfer coordination
- Optimize at the level of transfers rather than individual flows
[Orchestra slides]

26 Transfer Patterns
- Transfer: the set of all flows transporting data between two stages of a job
- A transfer acts as a barrier; its completion time is the time for the last receiver to finish
[Diagram: a broadcast into the map tasks, a shuffle from maps to reduces, and an incast* from reduces into HDFS]

27 Orchestra
- Cooperative broadcast (Cornet): infer and utilize topology information
- Weighted Shuffle Scheduling (WSS): assign flow rates to optimize shuffle completion time
- Inter-Transfer Controller (ITC): implement weighted fair sharing between transfers (fair sharing, FIFO, or priority) for end-to-end performance
[Architecture diagram from the Orchestra slides: the ITC sits above per-transfer controllers (TCs), one per shuffle or broadcast; each TC selects a mechanism, e.g. Hadoop shuffle or WSS for a shuffle, and HDFS, a distribution tree, or Cornet for a broadcast]

28 Cornet: Cooperative Broadcast
- Broadcast the same data to every receiver
- Fast, scalable, adaptive to bandwidth, and resilient
- A peer-to-peer, BitTorrent-like mechanism optimized for cooperative environments
Observations about data centers, and the Cornet design decisions they drive:
- High-bandwidth, low-latency network → large block size (4-16 MB)
- No selfish or malicious peers → no need for incentives (e.g. tit-for-tat), no (un)choking; everyone stays till the end
- Topology matters → topology-aware broadcast
[Orchestra slides]

29 Topology-Aware Cornet
- Many data center networks employ tree topologies
- Each rack should receive exactly one copy of the broadcast, minimizing cross-rack communication (a simple rack-aware plan is sketched below)
- Topology information reduces cross-rack data transfer; when the topology is unknown, Cornet infers it using a mixture of spherical Gaussians
[Orchestra slides]
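A minimal sketch of the rack-aware idea, not Cornet's actual protocol: pull the data across each rack boundary once, then serve the remaining nodes in that rack from the local copy. Node and rack names are invented.

```python
# Sketch: plan a broadcast so each rack receives exactly one cross-rack copy.
from collections import defaultdict

def plan_broadcast(source, nodes, rack_of):
    """Return (cross_rack_fetches, intra_rack_fetches) as (dst, src) pairs."""
    by_rack = defaultdict(list)
    for n in nodes:
        if n != source:
            by_rack[rack_of[n]].append(n)
    cross, intra = [], []
    for rack, members in by_rack.items():
        if rack == rack_of[source]:
            intra += [(n, source) for n in members]        # already local
            continue
        gateway = members[0]                               # one copy per rack
        cross.append((gateway, source))
        intra += [(n, gateway) for n in members[1:]]
    return cross, intra

rack_of = {"n1": "r1", "n2": "r1", "n3": "r2", "n4": "r2", "n5": "r2"}
cross, intra = plan_broadcast("n1", list(rack_of), rack_of)
print("cross-rack:", cross)   # only one transfer crosses into rack r2
print("intra-rack:", intra)
```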

30 Shuffle Bottlenecks
A shuffle can bottleneck at a sender, at a receiver, or in the network. An optimal shuffle schedule must keep at least one link fully utilized throughout the transfer. [Orchestra slides]

31 Status Quo in Shuffle
Example (from the Orchestra slides): two receivers r1, r2 and five senders s1-s5. s1 and s2 each have one data unit for r1, s4 and s5 each have one unit for r2, and s3 has two units for each receiver; every link carries one unit per time unit.
Under per-flow fair sharing:
- The links into r1 and r2 are full for 3 time units
- Then only s3's flows remain, and the link out of s3 is full for 2 time units
- Completion time: 5 time units

32 Weighted Shuffle Scheduling
Allocate rates to each flow using weighted fair sharing, where the weight of a flow between a sender-receiver pair is proportional to the total amount of data to be sent r1 r2 1 2 s1 s2 s3 s4 s5 Orchestra slide Completion time: 4 time units Up to 1.5X improvement
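A small sketch of WSS on the slide's example, using the data layout reconstructed above (s3 holds twice as much data for each receiver as the other senders; every link has capacity 1 unit per time unit).

```python
# Sketch: Weighted Shuffle Scheduling rates and completion time for the example.
flows = {  # (sender, receiver) -> data units
    ("s1", "r1"): 1, ("s2", "r1"): 1, ("s3", "r1"): 2,
    ("s3", "r2"): 2, ("s4", "r2"): 1, ("s5", "r2"): 1,
}

# WSS: at each receiver, give every flow a rate proportional to its size.
rates = {}
for (s, r), size in flows.items():
    total_at_r = sum(sz for (_, r2), sz in flows.items() if r2 == r)
    rates[(s, r)] = size / total_at_r            # receiver link capacity = 1

# s3's outgoing link then carries 0.5 + 0.5 = 1.0, exactly its capacity.
finish = max(size / rates[(s, r)] for (s, r), size in flows.items())
print("WSS rates:", rates)
print("WSS completion time:", finish, "time units (vs. 5 under per-flow fair sharing)")
```

Every flow finishes at time 4, which matches the lower bound set by the busiest links (r1, r2, and s3 each carry 4 units), so no schedule can do better in this example.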

33 Faster Spam Classification
- Communication reduced from 42% to 28% of the iteration time
- Overall 22% reduction in iteration time
[Orchestra slides]

34 Summary
- Tail latency in the network: the types of traffic, their implications for jobs, and the causes of tail latency
- HULL: trade bandwidth for latency, penalize huge flows, eliminate bursts and buffering
- Orchestra: optimize transfers instead of individual flows, using knowledge of application semantics

