
Ch. 7 : Internet Transport Protocols.


1 Ch. 7 : Internet Transport Protocols

2 Transport Layer: Our Goals
understand principles behind transport layer services:
- multiplexing / demultiplexing data streams of several applications
- reliable data transfer
- flow control
- congestion control
Chapter 6: rdt principles
Chapter 7: multiplexing / demultiplexing, and the Internet transport layer protocols:
- UDP: connectionless transport
- TCP: connection-oriented transport (connection setup, data transfer, flow control, congestion control)

3 Transport vs. network layer
Transport layer: logical communication between processes; exists only in hosts; ignores the network; port #s used for routing inside the destination computer.
Network layer: logical communication between hosts; exists in hosts and in routers; routes data through the network; IP addresses used for routing in the network.
The transport layer uses network layer services and adds more value to them.

4 Multiplexing & Demultiplexing

5 Multiplexing/demultiplexing
Multiplexing at the send host: gather data from multiple sockets, envelop the data with headers (later used for demultiplexing), pass to L3.
Demultiplexing at the receive host: receive segments from L3, deliver each received segment to the correct socket.
[Figure: three hosts, each with an application / transport / network / link / physical stack; processes (P1-P4) attach to sockets.]

6 How demultiplexing works
The host receives IP datagrams:
- each datagram has a source IP address and a destination IP address in its header
- each datagram carries one transport-layer segment
- each segment has source and destination port numbers in its header
The host uses the port numbers, and sometimes also the IP addresses, to direct the segment to the correct socket; from the socket, the data gets to the relevant application process.
[Figure: TCP/UDP segment format, 32 bits wide - L3 header (source IP addr, dest IP addr, other IP header fields), L4 header (source port #, dest port #, other header fields), then the application data (message).]

7 Connectionless demultiplexing (UDP)
Processes create sockets with port numbers; a UDP socket is identified by the pair (my IP address, my port number).
A client decides to contact a server (peer IP address) and an application (peer port #), and puts those into the UDP packet it sends:
- dest IP address - in the IP header of the packet
- dest port number - in its UDP header
When the server receives a UDP segment, it:
- checks the destination port number in the segment
- directs the UDP segment to the socket with that port number (packets from different remote sockets are directed to the same socket)
The UDP message waits in the socket queue and is processed in its turn; the answer message is sent to the client's UDP socket (listed in the Source fields of the query packet).
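A minimal Python sketch of this exchange (the port number 53000 is hypothetical, chosen only for illustration): the server socket is identified by (my IP, my port), and the reply goes to the source IP / source port learned from the incoming datagram.

import socket

SERVER_PORT = 53000                                  # hypothetical port for this sketch

def udp_server():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", SERVER_PORT))                     # socket identified by (my IP, my port)
    while True:
        data, client_addr = sock.recvfrom(2048)      # client_addr = (source IP, source port)
        sock.sendto(b"reply: " + data, client_addr)  # answer sent to the "return address"

def udp_client(server_ip):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"query", (server_ip, SERVER_PORT))  # dest IP in IP header, dest port in UDP header
    reply, _ = sock.recvfrom(2048)
    return reply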

8 Connectionless demux (cont)
[Figure: two clients - one at IP A using source port 9157, one at IP B using source port 5775 - send messages to the server's UDP socket (port 53 at IP C); the server's replies use each client's source port / source IP as their destination.]
Legend: SP = source port number, DP = destination port number, S-IP = source IP address, D-IP = destination IP address. SP and S-IP provide the "return address".

9 Connection-oriented demux (TCP)
A TCP socket is identified by a 4-tuple: local (my) IP address, local (my) port number, remote IP address, remote port number.
The receiving host uses all four values to direct a segment to the appropriate socket.
A server host may support many simultaneous TCP sockets, each identified by its own 4-tuple.
Web servers have a different socket for each connecting client:
- if you open two browser windows, you generate 2 sockets at each end
- non-persistent HTTP will open a different socket for each request
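A small Python sketch of connection-oriented demultiplexing (port 8080 is an arbitrary example): accept() returns a new working socket plus the peer address, and getsockname() gives the local half of the 4-tuple that identifies it.

import socket

def tcp_server():
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", 8080))                # arbitrary example port
    listener.listen()
    while True:
        conn, peer = listener.accept()       # a new working socket for each connecting client
        local = conn.getsockname()           # (local IP, local port) - the same for every client
        print("4-tuple:", local + peer)      # peer = (remote IP, remote port) differs per client
        conn.sendall(b"hello\n")
        conn.close()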

10 Connection-oriented demux (cont)
[Figure: three TCP connections to server C, port 80 - from client A port 9157, from client B port 9157, and from client B port 5775; the server has a separate working socket for each connection, distinguished by the full 4-tuple even when the remote port numbers coincide.]
Legend: LP = local port, RP = remote port, L-IP = local IP, R-IP = remote IP; "L" = local = my, "R" = remote = peer.

11 UDP Protocol

12 UDP: User Datagram Protocol [RFC 768]
simple transport protocol; "best effort" service: UDP segments may be
- lost
- delivered out of order to the application
with no correction by UDP (UDP will discard bad-checksum segments if so configured by the application).
connectionless:
- no handshaking between UDP sender and receiver
- each UDP segment is handled independently of the others
Why is there a UDP?
- no connection establishment saves delay
- no congestion control: better delay & bandwidth
- simple: small segment header
Typical usage: realtime applications that are loss tolerant and rate sensitive. Other uses (why?): DNS, SNMP.

13 UDP segment structure
[Figure: UDP segment format, 32 bits wide - source port #, dest port #, length (total length of the segment in bytes), checksum, then application data (variable length).]
Checksum computed over:
- the whole segment
- part of the IP header: both IP addresses, the protocol field, the total IP packet length
Checksum usage: recomputed at the destination to detect errors; in case of error, UDP will discard the segment or pass it to the application with a warning.

14 UDP checksum
Goal: detect "errors" (e.g., flipped bits) in the transmitted segment.
Sender:
- treat the segment contents as a sequence of 16-bit integers
- checksum: addition (1's complement sum) of the segment contents; 1's complement sum = add the carry bit back in
- sender puts the checksum value into the UDP checksum field
Receiver:
- compute the checksum of the received segment
- check whether the computed checksum equals the checksum field value: NO - error detected; YES - no error detected
(A small code sketch of this sum follows below.)
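A minimal Python sketch of the 1's complement sum (simplified: a real UDP checksum also covers the pseudo-header fields listed on the previous slide):

def internet_checksum(data: bytes) -> int:
    """16-bit one's complement sum of the data, then complemented."""
    if len(data) % 2:                             # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # treat contents as 16-bit integers
        total = (total & 0xFFFF) + (total >> 16)  # 1's complement: add the carry bit back in
    return ~total & 0xFFFF

# Receiver check: recomputing the sum over data + transmitted checksum yields 0xFFFF
# (i.e., the complemented sum is 0) when no error is detected.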

15 TCP Protocol

16 TCP: Overview (RFCs: 793, 1122, 1323, 2018, 2581)
point-to-point: one sender, one receiver (between sockets)
reliable, in-order byte stream: no "message boundaries"
pipelined: TCP congestion and flow control set the window size
send & receive buffers
full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
connection-oriented: handshaking (exchange of control msgs) initializes sender and receiver state before data exchange
flow controlled: sender will not overwhelm the receiver

17 TCP segment structure
[Figure: TCP segment format, 32 bits wide - source port #, dest port #, sequence number, acknowledgement number, header length (in 32-bit words), unused bits, flags (URG, ACK, PSH, RST, SYN, FIN), receiver window size, checksum, urgent data pointer, options (variable length), then application data (variable length).]
Notes:
- sequence and acknowledgement numbers count bytes of data (not segments!)
- URG: urgent data (generally not used); PSH: push data now (generally not used)
- ACK: ACK # valid
- RST, SYN, FIN: connection establishment (setup, teardown commands)
- receiver window size: # bytes the receiver is willing to accept
- checksum: Internet checksum (as in UDP)

18 TCP sequence # (SN) and ACK (AN)
SN: byte-stream "number" of the first byte in the segment's data.
AN: SN of the next byte expected from the other side (cumulative ACK).
Q: how does the receiver handle out-of-order segments? It puts them in the receive buffer but does not acknowledge them.
Simple data transfer scenario (some time after connection setup):
- host A sends 100 data bytes: SN=42, AN=79, 100 data bytes
- host B ACKs the 100 bytes and sends 50 data bytes: SN=79, AN=142, 50 data bytes
- host A ACKs receipt of the data and sends no data: SN=142, AN=129, no data (WHY no data?)
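The SN/AN arithmetic of the scenario above as a tiny Python check (the numbers are taken from the slide):

def next_sn_an(my_sn, my_data_len, peer_sn, peer_data_len):
    """My next SN advances by the bytes I sent; my AN is the peer's SN plus the bytes it sent."""
    return my_sn + my_data_len, peer_sn + peer_data_len

# A sent SN=42 with 100 bytes and received SN=79 with 50 bytes,
# so A's next segment carries SN=142, AN=129 (a pure ACK).
print(next_sn_an(42, 100, 79, 50))   # -> (142, 129)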

19 Connection Setup: Objective
Agree on initial sequence numbers:
- a sender should not reuse a seq# before it is sure that all packets with that seq# have been purged from the network
- the network guarantees that a packet that is too old will be purged: the network bounds the lifetime of each packet
- to avoid waiting for seq #s to disappear, start a new session with a seq# far away from the previous one
- this needs connection setup, so that the sender tells the receiver the initial seq#
Agree on other initial parameters, e.g. Maximum Segment Size.

20 TCP Connection Management
Setup: establish a connection between the hosts before exchanging data segments - the "3-way handshake".
Initialize TCP variables: seq. #s, buffers, flow control info (e.g. RcvWindow).
Client (connection initiator): opens a socket and commands the OS to connect it to the server.
Server (contacted by the client): has a waiting socket; accepts the connection and generates a working socket.
Three-way handshake:
- Step 1: client host sends a TCP SYN segment to the server - specifies the client's initial seq #, carries no data
- Step 2: server host receives the SYN and replies with a SYNACK segment (also no data) - allocates buffers, specifies the server's initial SN & window size
- Step 3: client receives the SYNACK and replies with an ACK segment, which may contain data
Teardown: end of connection (we skip the details).

21 TCP Three-Way Handshake (TWH)
[Figure: three-way handshake between A and B; after the exchange, each side's send and receive buffers start at X+1 and Y+1.]
- SYN, SN = X
- SYNACK, SN = Y, AN = X+1
- ACK, SN = X+1, AN = Y+1

22 Connection Close
Objective of the closure handshake: each side can release its resources and remove state about the connection.
[Figure: the client initiates the close ("I am done. Are you done too?"), the server answers ("I am done too. Goodbye!"); each side then closes its socket and releases its resources.]

23 Ch. 7 : Internet Transport Protocols Part B

24 TCP reliable data transfer
TCP creates a reliable service on top of IP's unreliable service:
- pipelined segments
- cumulative ACKs
- single retransmission timer
- the receiver accepts out-of-order segments but does not acknowledge them
- retransmissions are triggered by timeout events
Initially consider a simplified TCP sender: ignore flow control and congestion control.

25 TCP sender events
Data received from the application:
- create a segment with a seq #; the seq # is the byte-stream number of the first data byte in the segment
- start the timer if it is not already running (think of the timer as covering the oldest unACKed segment); expiration interval: TimeOutInterval
Timeout:
- retransmit the segment that caused the timeout
- restart the timer
ACK received:
- if it acknowledges previously unACKed segments, update what is known to be ACKed
- start the timer if there are outstanding segments

26 TCP sender (simplified)
NextSeqNum = InitialSeqNum
SendBase = InitialSeqNum

loop (forever) {
  switch(event)

  event: data received from application above
    create TCP segment with sequence number NextSeqNum
    if (timer currently not running)
      start timer
    pass segment to IP
    NextSeqNum = NextSeqNum + length(data)

  event: timer timeout
    retransmit not-yet-acknowledged segment with smallest sequence number
    start timer

  event: ACK received, with ACK field value of y
    if (y > SendBase) {
      SendBase = y
      if (there are currently not-yet-acknowledged segments)
        start timer
    }
} /* end of loop forever */

Comment: SendBase-1 is the last cumulatively ACKed byte.
Example: SendBase-1 = 71 and y = 73, so the receiver wants byte 73 and beyond; y > SendBase, so new data is ACKed.

27 TCP receiver: actions on events
Data received from IP:
- if the checksum fails, ignore the segment
- if the checksum is OK and the data came in order: update AN and WIN - AN grows by the number of new in-order bytes, WIN decreases by the same number
- if the checksum is OK but the data is out of order: put it in the buffer, but don't count it toward AN / WIN
Application takes data:
- free the room in the buffer; give the freed cells new numbers (circular numbering)
- WIN increases by the number of bytes taken

28 TCP: retransmission scenarios
[Figure: two timelines between host A and host B.]
A. Normal scenario: A sends SN=92 (8 bytes) and starts the timer; B replies AN=100 and A stops the timer; A then sends SN=100 (20 bytes), starts the timer, receives AN=120 and stops the timer.
B. Lost ACK + retransmission: A sends SN=92 (8 bytes) and starts the timer; B's AN=100 is lost; the timer expires, A retransmits SN=92 (8 bytes) and restarts the timer; B replies AN=100 and A stops the timer.
(The figure also distinguishes the configured timer setting from the actual timer run time.)

29 TCP retransmission scenarios (more)
[Figure: two timelines between host A and host B.]
C. Lost ACK, NO retransmission: A sends SN=92 (8 bytes) and then SN=100 (20 bytes); the first ACK (AN=100) is lost, but the cumulative AN=120 arrives before the timeout, so nothing is retransmitted.
D. Premature timeout: A sends SN=92 (8 bytes) and then SN=100 (20 bytes); the timer for SN=92 expires before AN=100 arrives, so A retransmits SN=92; the following ACKs (AN=120) are cumulative, and the extra ACK is redundant.

30 TCP ACK generation [RFC 1122, RFC 2581]
Event: arrival of an in-order segment with the expected seq #; all data up to the expected seq # already ACKed.
Action: delayed ACK - wait up to 500 ms for the next segment; if there is no data segment to send by then, send the ACK.

Event: arrival of an in-order segment with the expected seq #; one other segment has an ACK pending.
Action: immediately send a single cumulative ACK, ACKing both in-order segments.

Event: arrival of an out-of-order segment with a higher-than-expected seq #; a gap is detected.
Action: immediately send a duplicate ACK, indicating the seq # of the next expected byte (this ACK carries no data & no new WIN).

Event: arrival of a segment that partially or completely fills the gap.
Action: immediately send an ACK, provided that the segment starts at the lower end of the gap.

31 Fast Retransmit
The timeout period is often relatively long, causing a long delay before a lost packet is resent.
Instead, detect lost segments via duplicate ACKs:
- the sender often sends many segments back-to-back
- if a segment is lost, there will likely be many duplicate ACKs for that segment
If the sender receives 3 duplicate ACKs for the same data, it assumes that the segment after the ACKed data was lost.
Fast retransmit: resend the segment before the timer expires.

32 Fast retransmit: example
[Figure: host A sends segments with seq #s x1, x2, x3, x4, ...; one segment is lost, so each later arrival triggers another ACK for x2; after the triple duplicate ACK, A resends seq x2 without waiting for the timeout.]

33 Fast retransmit algorithm:
event: ACK received, with ACK field value of y
  if (y > SendBase) {                                /* new data is ACKed */
    SendBase = y
    if (there are currently not-yet-acknowledged segments)
      start timer
  }
  else {                                             /* a duplicate ACK for already ACKed data */
    if (segment carries no data & doesn't change WIN)
      increment count of dup ACKs received for y
    if (count of dup ACKs received for y == 3)
      resend segment with sequence number y          /* fast retransmit */
  }

34 TCP: Flow Control

35 TCP Flow Control for A’s data
Flow control: the sender won't overflow the receiver's buffer by transmitting too much, too fast.
The receive side of the TCP connection at B has a receive buffer (its size is set by the OS at connection init); the application process at B may be slow at reading from the buffer.
Flow control matches the send rate of A to the receiving application's drain rate at B.
WIN = window size = the number of bytes A may send starting at AN.
[Figure: B's receive buffer - data from IP (sent by TCP at A) enters the buffer, TCP data waits in the buffer until taken by the application, and the rest is spare room.]

36 TCP Flow control: how it works
[Figure: B's receive buffer - ACKed data in the buffer, non-ACKed data that arrived out of order, and spare room; AN marks the first byte not received yet.]
Formulas:
- AN = first byte not received yet
- AckedRange = AN - FirstByteNotReadByAppl = # bytes received in sequence & not yet taken by the application
- WIN = RcvBuffer - AckedRange = SpareRoom
- AN and WIN are sent to A in the TCP header
Procedure:
- the receiver advertises the "spare room" by including the value of WIN in its segments
- sender A is allowed to send at most WIN bytes in the range starting at AN
- this guarantees that the receive buffer doesn't overflow
NOTE: data that arrived out of order is included in the count of the "spare room" - this is the reason for the quotes.
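A small Python sketch of the window arithmetic above (the variable names are illustrative, not taken from a real TCP implementation):

def advertised_window(rcv_buffer, an, first_byte_not_read):
    """WIN = RcvBuffer - AckedRange, with AckedRange = AN - FirstByteNotReadByAppl."""
    acked_range = an - first_byte_not_read    # bytes received in sequence, not yet taken by the app
    return rcv_buffer - acked_range           # spare room = how many bytes A may send starting at AN

# Example: 4096-byte buffer, AN = 5000, application has read everything up to byte 4000:
print(advertised_window(4096, an=5000, first_byte_not_read=4000))   # -> 3096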

37 TCP flow control - Example 1

38 TCP flow control - Example 2

39 TCP: setting timeouts

40 TCP Round Trip Time and Timeout
Q: how to set the TCP timeout value?
- longer than the RTT (but note: the RTT will vary)
- too short: premature timeout, unnecessary retransmissions
- too long: slow reaction to segment loss
Q: how to estimate the RTT?
- SampleRTT: measured time from segment transmission until ACK receipt (ignore retransmissions and cumulatively ACKed segments)
- SampleRTT will vary; we want the estimated RTT to be "smoother": use several recent measurements, not just the current SampleRTT

41 Setting the timeout: high-level idea
Set timeout = average RTT + safe margin.

42 Estimating Round Trip Time
SampleRTT: measured time from segment transmission until ACK receipt.
SampleRTT will vary; we want a "smoother" estimated RTT that uses several recent measurements, not just the current SampleRTT:
EstimatedRTT = (1 - α)*EstimatedRTT + α*SampleRTT
This is an exponential weighted moving average: the influence of a past sample decreases exponentially fast. Typical value: α = 0.125.

43 Setting the Timeout
Problem: using just the average of SampleRTT will generate many timeouts due to network variations.
Solution: EstimatedRTT plus a "safety margin" - a large variation in EstimatedRTT calls for a larger safety margin:
DevRTT = (1 - β)*DevRTT + β*|SampleRTT - EstimatedRTT|   (typically, β = 0.25)
Then set the timeout interval:
TimeoutInterval = EstimatedRTT + 4*DevRTT
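The two estimators above as a small Python routine (α = 0.125 and β = 0.25 as on the slides; updating DevRTT against the previous EstimatedRTT is one common ordering, assumed here):

ALPHA, BETA = 0.125, 0.25

def update_timeout(estimated_rtt, dev_rtt, sample_rtt):
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    timeout_interval = estimated_rtt + 4 * dev_rtt     # average + safety margin
    return estimated_rtt, dev_rtt, timeout_interval

# Example: current estimate 100 ms, deviation 10 ms, new sample 140 ms:
print(update_timeout(0.100, 0.010, 0.140))   # -> approximately (0.105, 0.0175, 0.175)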

44 An Example TCP Session

45 TCP: Congestion Control

46 TCP Congestion Control
Closed-loop, end-to-end, window-based congestion control.
Designed by Van Jacobson in the late 1980s, based on the AIMD algorithm of Dah-Ming Chiu and Raj Jain.
It has worked well so far: the bandwidth of the Internet has increased by more than 200,000 times.
Many versions:
- TCP/Tahoe: a less optimized version
- TCP/Reno: many OSs today implement Reno-type congestion control
- TCP/Vegas: not currently used
For more details: see TCP/IP Illustrated, or read the Linux implementation.

47 TCP & AIMD: dynamic window size [Van Jacobson]
Initialization: MI (slow start).
Steady state: AIMD (congestion avoidance).
What counts as "congestion" differs by version:
- congestion = timeout: TCP Tahoe
- congestion = timeout || 3 duplicate ACKs: TCP Reno & TCP New Reno
- congestion = higher latency: TCP Vegas

48 Visualization of the Two Phases
[Figure: Congwin (in MSS) vs. time - exponential growth during slow start up to the congestion avoidance threshold, then linear growth during congestion avoidance.]

49 TCP Slow Start: MI

Slow start algorithm:
  initialize: Congwin = 1 MSS
  for (each segment ACKed)
    Congwin += MSS
  until (congestion event OR Congwin > threshold)

This gives an exponential increase (per RTT) in the window size - not so slow!
In case of timeout: Threshold = Congwin/2.
[Figure: host A sends one segment, then two, then four - the window doubles every RTT.]

50 TCP Tahoe Congestion Avoidance
Congestion avoidance:
  /* slow start is over */
  /* Congwin > threshold */
  until (timeout) {                                   /* loss event */
    on every ACK: Congwin/MSS += 1/(Congwin/MSS)      /* i.e., Congwin grows by about 1 MSS per RTT */
  }
  /* after the timeout: */
  threshold = Congwin/2
  Congwin = 1 MSS
  perform slow start

51 TCP Reno
Fast retransmit: try to avoid waiting for a timeout.
Fast recovery: try to avoid slow start; used only on a triple-duplicate-ACK event.
Result: a single packet drop is not too costly.

52 TCP Reno cwnd Trace
[Figure: cwnd over time - slow start, then congestion avoidance (CA); each triple duplicate ACK halves cwnd and the connection continues in CA, while a timeout sends it back to slow start.]

53 TCP congestion control: bandwidth probing
"Probing for bandwidth": increase the transmission rate on receipt of ACKs until loss eventually occurs, then decrease the transmission rate.
Continue to increase on ACKs and decrease on loss (since the available bandwidth changes, depending on the other connections in the network).
[Figure: TCP's "sawtooth" behavior - the sending rate ramps up while ACKs are being received and drops at each loss (X), over and over.]
Q: how fast to increase/decrease? Details follow.

54 TCP Congestion Control: details
The sender limits its rate by limiting the number of unACKed bytes "in the pipeline":
LastByteSent - LastByteAcked <= cwnd
cwnd differs from rwnd (how, why?); the sender is limited by min(cwnd, rwnd).
Roughly, cwnd is dynamic, a function of perceived network congestion, and
rate = cwnd / RTT   [bytes/sec]

55 TCP Congestion Control: more details
Segment loss event: reduce cwnd.
- timeout: no response from the receiver - cut cwnd to 1
- 3 duplicate ACKs: at least some segments are getting through (recall fast retransmit) - cut cwnd in half, less aggressively than on a timeout
ACK received: increase cwnd.
- slow start phase: start low (cwnd = MSS) and increase cwnd exponentially fast (despite the name); used at connection start, or following a timeout
- congestion avoidance phase: increase cwnd linearly

56 TCP Slow Start
When the connection begins, cwnd = 1 MSS.
Example: MSS = 500 bytes & RTT = 200 msec, so the initial rate = 20 kbps.
The available bandwidth may be >> MSS/RTT, so it is desirable to quickly ramp up to a respectable rate.
Increase the rate exponentially until the first loss event or until the threshold is reached:
- double cwnd every RTT
- done by incrementing cwnd by 1 MSS for every ACK received
[Figure: host A sends one segment, then two, then four - the window doubles every RTT.]
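A quick check of the initial-rate figure in the example above (one MSS per RTT at connection start):

MSS_BYTES, RTT_SEC = 500, 0.2
initial_rate_bps = MSS_BYTES * 8 / RTT_SEC
print(initial_rate_bps)   # -> 20000.0 bits/s, i.e. 20 kbps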

57 TCP: congestion avoidance
AIMD: Additive Increase, Multiplicative Decrease.
When cwnd > ssthresh, grow cwnd linearly:
- increase cwnd by 1 MSS per RTT, approaching possible congestion more slowly than in slow start
- implementation: cwnd = cwnd + MSS^2/cwnd for each ACK received
ACKs: increase cwnd by 1 MSS per RTT - additive increase.
Loss: cut cwnd in half (for non-timeout-detected loss) - multiplicative decrease.
This is true in the macro picture; a connection may require Slow Start first to grow up to this regime.

58 TCP congestion control FSM: overview
[Figure: FSM with three states - slow start, congestion avoidance, fast recovery. "cwnd > ssthresh" moves slow start to congestion avoidance; "loss: timeout" returns any state to slow start; "loss: 3dupACK" moves slow start or congestion avoidance to fast recovery; "new ACK" moves fast recovery back to congestion avoidance.]

59 TCP congestion control FSM: details
Initialization: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0; start in slow start.
Slow start:
- new ACK: cwnd = cwnd + MSS, dupACKcount = 0, transmit new segment(s) as allowed
- duplicate ACK: dupACKcount++
- when cwnd > ssthresh: move to congestion avoidance
Congestion avoidance:
- new ACK: cwnd = cwnd + MSS*(MSS/cwnd), dupACKcount = 0, transmit new segment(s) as allowed
- duplicate ACK: dupACKcount++
From slow start or congestion avoidance:
- timeout: ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment; go to slow start
- dupACKcount == 3: ssthresh = cwnd/2, cwnd = ssthresh + 3 MSS, retransmit missing segment; go to fast recovery
Fast recovery:
- duplicate ACK: cwnd = cwnd + MSS, transmit new segment(s) as allowed
- new ACK: cwnd = ssthresh, dupACKcount = 0; go to congestion avoidance
- timeout: ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment; go to slow start

60 Popular “flavors” of TCP
[Figure: window size (in segments) vs. transmission round - after a loss, TCP Tahoe drops the window to 1 segment and slow-starts back up to ssthresh, while TCP Reno drops only to ssthresh and continues linearly.]

61 Summary: TCP Congestion Control
When cwnd < ssthresh, the sender is in the slow-start phase and the window grows exponentially.
When cwnd >= ssthresh, the sender is in the congestion-avoidance phase and the window grows linearly.
When a triple duplicate ACK occurs, ssthresh is set to cwnd/2 and cwnd is set to ~ssthresh.
When a timeout occurs, ssthresh is set to cwnd/2 and cwnd is set to 1 MSS.
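A toy per-RTT simulation of these rules in Python (cwnd and ssthresh are in MSS units; the event names and the initial ssthresh of 8 are illustrative only):

def step(cwnd, ssthresh, event):
    """Return (new cwnd, new ssthresh) after one event."""
    if event == "timeout":
        return 1, max(cwnd // 2, 2)                   # ssthresh = cwnd/2, cwnd = 1 MSS
    if event == "triple_dup":
        return max(cwnd // 2, 2), max(cwnd // 2, 2)   # ssthresh = cwnd/2, cwnd ~ ssthresh
    if cwnd < ssthresh:                               # one RTT of ACKs in slow start: exponential
        return cwnd * 2, ssthresh
    return cwnd + 1, ssthresh                         # one RTT of ACKs in congestion avoidance: linear

cwnd, ssthresh = 1, 8
for ev in ["rtt_of_acks"] * 4 + ["triple_dup"] + ["rtt_of_acks"] * 2 + ["timeout", "rtt_of_acks"]:
    cwnd, ssthresh = step(cwnd, ssthresh, ev)
    print(ev, cwnd, ssthresh)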

62 TCP throughput
Q: what is the average throughput of TCP as a function of window size and RTT? (ignoring slow start)
Let W be the window size when loss occurs:
- when the window is W, the throughput is W/RTT
- just after a loss, the window drops to W/2 and the throughput to W/(2*RTT)
- average throughput: 0.75*W/RTT
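A one-line sanity check of the 0.75*W/RTT figure: over one sawtooth cycle the window ramps linearly from W/2 up to W, so its average is (W/2 + W)/2 = 0.75*W (the numbers below are illustrative):

W, RTT = 16, 0.1                          # window in segments, RTT in seconds
avg_throughput = (W / 2 + W) / 2 / RTT
print(avg_throughput, 0.75 * W / RTT)     # both -> 120.0 segments per second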

63 TCP Fairness
Fairness goal: if K TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/K.
[Figure: TCP connection 1 and TCP connection 2 sharing a bottleneck router of capacity R.]

64 Why is TCP fair?
Consider two competing sessions (Tahoe, with Slow Start ignored):
- additive increase gives a slope of 1 as the two throughputs increase together
- multiplicative decrease reduces each throughput proportionally
[Figure: throughput of connection 2 vs. throughput of connection 1, both bounded by R; the operating point moves toward the "equal bandwidth share" line.]
Starting at (a, b):
- congestion avoidance (additive increase) moves the point to (a+t, b+t), which lies on the line y = x + (b-a)
- a loss halves both windows, moving the point to (a/2 + t/2, b/2 + t/2), which lies on y = x + (b-a)/2
- the next increase-and-halve cycle lands on y = x + (b-a)/4, and so on: the offset b-a shrinks toward 0, i.e. toward the equal bandwidth share.

65 Fairness (more)
Fairness and parallel TCP connections:
- nothing prevents an app from opening parallel connections between 2 hosts; web browsers do this
- example: a link of rate R already supports 9 connections; a new app asking for 1 TCP connection gets rate R/10, but a new app asking for 11 TCP connections gets about R/2!
Fairness and UDP:
- multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
- instead they use UDP: pump audio/video at a constant rate, tolerate packet loss
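The arithmetic behind the parallel-connection example, assuming the bottleneck splits its rate R equally among all TCP connections:

def app_share(R, existing_conns, app_conns):
    """Share of rate R obtained by an app that opens app_conns of the connections."""
    return R * app_conns / (existing_conns + app_conns)

R = 1.0
print(app_share(R, existing_conns=9, app_conns=1))    # -> 0.1  (R/10)
print(app_share(R, existing_conns=9, app_conns=11))   # -> 0.55 (about R/2)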

66 Exercise: MSS = 1000; only one event per row.

