MULTIMEDIA TRAFFIC MANAGEMENT ON TCP/IP OVER ATM-UBR By Dr. ISHTIAQ AHMED CH.


1 MULTIMEDIA TRAFFIC MANAGEMENT ON TCP/IP OVER ATM-UBR By Dr. ISHTIAQ AHMED CH.

2 OVERVIEW  Introduction.  Problem definition.  Previous related work.  Unique experimental design.  Analysis of several TCP implementations, showing that they do not utilize the available bandwidth efficiently.  The Dynamic Granularity Control (DGC) algorithm for TCP, proposed on the basis of this analysis.  Conclusions.

3 INTRODUCTION Management of multimedia communications requires:  Efficient resource management.  Maximum utilization of the allocated bandwidth.  Provision of QoS parameters.

4 Tools Selected for Multimedia Communications Among the available tools for multimedia communications, ATM networks and the TCP/IP protocol were selected.

5 ATM (Asynchronous Transfer Mode) Features: a high-speed network technology; promising traffic with Quality of Service (QoS); an academic network that is easy to use. Multi-service traffic categories: CBR (Constant Bit Rate) and UBR (Unspecified Bit Rate).

6 TCP/IP The most widely used protocol on the Internet. It has a lot of research potential to meet network communication requirements. Source code and supporting material are easily available.

7 PROBLEM DEFINITION [Diagram: ATM switches with buffer sizes of 1K or 2K cells versus 3K or 4K cells.]

8 PROBLEM DEFINITION Multimedia communications suffer from three major problems: 1. ATM switch buffer overflow. 2. Loss of Protocol Data Units (PDUs) by the protocol in use. 3. Unfairness among multiple TCP connections.

9 My Research Problem This research addresses the above-mentioned problems by:  Avoiding congestion in the ATM network.  Efficiently utilizing the allocated bandwidth.  Ensuring fairness among multiple TCP connections.

10 Transmission Control Protocol Implementations of TCP: TCP Tahoe, TCP Reno, TCP NewReno, TCP SACK. Congestion control algorithms of TCP: Slow-Start, Congestion Avoidance, Fast Retransmit, Fast Recovery.

11 Previous Related Work TCP/IP over ATM Jacobson (1988): TCP Tahoe added the Slow-Start, Congestion Avoidance, and Fast Retransmit algorithms to avoid loss of data. Jacobson (1990): TCP Reno modified Tahoe's Fast Retransmit algorithm and added the Fast Recovery algorithm. Gunningberg (1994): The large MTU size of ATM causes throughput deadlock.

12 Previous Related Work TCP/IP over ATM Romanow (1995): When cells of a large packet are lost at the ATM level, TCP throughput suffers heavily; the larger the MTU size, the lower the TCP throughput. This gave rise to cell discard strategies such as PPD (Partial Packet Discard) and EPD (Early Packet Discard). Hoe (1996): The Slow-Start algorithm ends up pumping too much data into the network, and the Fast Retransmit algorithm may recover only one of multiple packet losses.
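The EPD strategy mentioned above can be sketched as follows; this is a minimal illustrative model (not an actual switch implementation), assuming a single FIFO cell buffer with a hypothetical discard threshold:

```python
# Sketch of Early Packet Discard (EPD) at an ATM switch output buffer.
# When a new packet arrives while buffer occupancy already exceeds the
# EPD threshold, ALL cells of that packet are dropped, so buffer space
# is not wasted on cells of a packet that could not be reassembled anyway.

def epd_enqueue(buffer, capacity, threshold, packet_cells):
    """Try to enqueue all cells of one packet; return True if accepted."""
    if len(buffer) > threshold:              # EPD: reject the whole new packet
        return False
    if len(buffer) + packet_cells > capacity:
        return False                         # would overflow: reject whole packet
    buffer.extend([1] * packet_cells)        # accept every cell of the packet
    return True

buf = []
accepted = [epd_enqueue(buf, capacity=100, threshold=60, packet_cells=20)
            for _ in range(6)]
print(accepted)   # early packets accepted whole, later ones dropped whole
```

The point of EPD versus plain cell drop is that a packet is either queued completely or discarded completely, so TCP never wastes bandwidth carrying fragments it must retransmit anyway.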

13 Previous Related Work TCP/IP over ATM Floyd (1996): The TCP Reno implementation was modified to recover from multiple segment losses; the resulting implementation is named TCP NewReno. Mathis (1996): The Fast Retransmit and Fast Recovery algorithms were modified using Selective Acknowledgment options; this TCP version is known as TCP SACK.

14 Problems of TCP/IP over ATM SUMMARY of Related Research Work oSegment losses badly affect the throughput of TCP over congested ATM networks. oThe Fast Retransmit and Fast Recovery algorithms of Reno TCP are unable to recover multiple segment losses within the same window of data. oNewReno TCP and Linux TCP algorithms are supposed to recover these segment losses, but...

15 Previous Research on TCP/IP over ATM is related to: Multiple UBR streams from different sources contending at the same output port of the ATM switch. The major part of the related research is based on simulation studies.

16 Unique Experimental Design The ATM network becomes congested because:  A CBR flow, which has absolute precedence, and a TCP flow over UBR share the same output port of the ATM switch.  The cell buffer size in the ATM switch for UBR meets only the minimum requirement.

17 Unique Experimental Design [Testbed diagram: a Fujitsu EA1550 ATM switch connects hosts A, B, and C running FreeBSD 3.2-R; a traffic generator injects CBR streams while TCP traffic runs over UBR, with cell loss occurring at the switch; throughput is measured with Netperf and tcpdump. TCP throughput is analyzed with respect to four parameters: CBR streams, switch buffer size, MTU size, and socket buffer size.]

18 My Research Contribution  Throughput measurement and analysis of TCP over congested ATM under a variety of network parameters.  Throughput evaluation and analysis of several TCP implementations.  A new congestion control scheme for TCP, proposed to avoid congestion in the ATM network and to improve TCP throughput.

19 Performance Analysis of Linux TCP

20 Congestion Control Algorithms of Linux TCP oSlow-Start algorithm. oCongestion Avoidance algorithm. oFast Retransmit algorithm. oFast Recovery algorithm.

21 Slow-Start Algorithm [Diagram: the sender transmits the 1st segment through the ATM switch to the receiver; after the acknowledgment (ACK) returns, the 2nd segment follows, and the window grows with each ACK.]
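The window growth pictured above can be sketched as follows; a simplified model in units of segments, with illustrative parameter values:

```python
# Sketch of TCP Slow-Start: the congestion window (cwnd) grows by one
# segment per ACK received, i.e. it doubles every round-trip time,
# until it reaches the slow-start threshold (ssthresh).

def slow_start(cwnd, ssthresh, rtts):
    """Return the cwnd values over the given number of RTTs of slow-start."""
    history = [cwnd]
    for _ in range(rtts):
        if cwnd >= ssthresh:
            break                            # hand over to congestion avoidance
        cwnd = min(cwnd * 2, ssthresh)       # one ACK per segment -> doubling
        history.append(cwnd)
    return history

print(slow_start(cwnd=1, ssthresh=64, rtts=10))  # 1, 2, 4, ..., 64
```

The exponential growth is why Slow-Start can "pump too much data" into a small switch buffer, as the related-work slides note.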

22 Congestion Avoidance Algorithm [Diagram: the sender's window grows by one segment per RTT; the 2nd segment is sent after one RTT, with acknowledgments (ACKs) returning from the receiver through the ATM switch.]
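The one-segment-per-RTT growth above is commonly approximated by incrementing cwnd by MSS*MSS/cwnd bytes per ACK. A minimal sketch with illustrative values:

```python
# Sketch of TCP Congestion Avoidance: for each ACK, cwnd grows by
# roughly MSS*MSS/cwnd bytes, which adds up to about one MSS per
# round-trip time (linear growth, in contrast to Slow-Start's doubling).

def congestion_avoidance_rtt(cwnd, mss):
    """Advance cwnd by one RTT: one ACK arrives per in-flight segment."""
    acks = cwnd // mss                   # segments acknowledged this RTT
    for _ in range(acks):
        cwnd += mss * mss // cwnd        # per-ACK additive increase
    return cwnd

cwnd = 4 * 1460                          # start at 4 segments of a 1460-byte MSS
for _ in range(3):
    cwnd = congestion_avoidance_rtt(cwnd, mss=1460)
print(cwnd)                              # about one MSS of growth per RTT
```

With exactly one segment in flight the increment is exactly one MSS per RTT; with more segments in flight, integer division makes the growth slightly sublinear, but still roughly one MSS per RTT.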

23 Fast Retransmit and Fast Recovery Algorithms [Diagram: duplicate ACKs returning from the receiver through the ATM switch signal a lost segment, which the sender retransmits without waiting for a timeout.]
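The duplicate-ACK behavior above can be sketched as follows; a simplified Reno-style model in units of segments (it omits the temporary window inflation of full Fast Recovery), with illustrative values:

```python
# Sketch of Reno-style Fast Retransmit / Fast Recovery: three duplicate
# ACKs signal a lost segment; the sender retransmits it immediately
# (without waiting for a retransmission timeout) and halves its window.

def on_dup_ack(state):
    """Handle one duplicate ACK; return the sequence to retransmit, if any."""
    state["dup_acks"] += 1
    if state["dup_acks"] == 3:                       # fast retransmit trigger
        state["ssthresh"] = max(state["cwnd"] // 2, 2)
        state["cwnd"] = state["ssthresh"]            # fast recovery: halve cwnd
        return state["ack_seq"]                      # resend the missing segment
    return None

state = {"cwnd": 16, "ssthresh": 32, "dup_acks": 0, "ack_seq": 42}
retransmitted = [on_dup_ack(state) for _ in range(3)]
print(retransmitted, state["cwnd"])   # [None, None, 42] 8
```

Note that only one segment is retransmitted per window of duplicate ACKs, which is exactly why Reno cannot recover multiple losses in the same window, as the summary slide states.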

24 Throughput Results [Plot: effective throughput (Mbps) vs. UBR switch buffer size (Kbytes); CBR stream = 100 Mbps, socket buffer size = 64 Kbytes. Curves: Reno TCP with MTU = 9180 bytes; Linux TCP with MTU = 9180, 1500, and 512 bytes.]

25 Throughput Results [Plot: TCP throughput over UBR (Mbps) vs. CBR pressure (Mbps); switch buffer size = 53 Kbytes, socket buffer = 64 Kbytes, MTU = 9180 bytes. Curves: Reno TCP and Linux TCP.]

26 Segments Acknowledged [Plot: number of bytes acknowledged (Kbytes) vs. time (sec); CBR = 100 Mbps, MTU = 9180 bytes, buffer size = 53 Kbytes. Linux TCP effective throughput = 7.06 Mbps.]

27 Analysis of Linux TCP  TCP throughput is less than 20% of the available bandwidth, varying between 14 and 16%.  Retransmission timeouts are fewer than with Reno TCP.  Linux TCP will perform poorly in connection-sensitive applications because its retransmission timer expires.  The retransmission timer expires while the sender is deciding what to send.  Retransmission timeouts and Fast Retransmit/Recovery (FRR) processes consume more than 50% of the total time.  If the MTU size is large, congestion occurs sooner.

28 Proposed Dynamic Granularity Control (DGC) algorithm for TCP A more conservative version of Jacobson's congestion avoidance scheme is applied by reducing the MSS. Step 1: Congestion Avoidance  If MSS > 1460 bytes, decrease MSS to 1460 bytes.  If MSS = 1460 bytes, decrease MSS to 512 bytes.

29 Proposed DGC Algorithm for TCP The Fast Retransmit Machine (FRM) consists of the following stages:  Fast Retransmission.  Fast Recovery.  TCP reordering.  Segment loss. Step 2: FRM.  Reduce MSS to 512 bytes on FRM events.
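The two DGC steps above can be sketched as follows; this is a simplified restatement of the slide's rules, not the actual kernel patch, and the event names are illustrative:

```python
# Sketch of the Dynamic Granularity Control (DGC) MSS-reduction rules:
# Step 1 (congestion avoidance): MSS shrinks 9180 -> 1460 -> 512 bytes.
# Step 2 (fast retransmit machine): any FRM event drops MSS to 512 bytes.
# Event names here are illustrative labels, not kernel identifiers.

FRM_EVENTS = {"fast_retransmit", "fast_recovery", "reordering", "segment_loss"}

def dgc_adjust_mss(mss, event):
    """Return the reduced MSS after a congestion-related event."""
    if event in FRM_EVENTS:
        return 512                   # Step 2: FRM events force the smallest MSS
    if event == "congestion_avoidance":
        if mss > 1460:
            return 1460              # Step 1a: large MTU (e.g. ATM's 9180) -> 1460
        if mss == 1460:
            return 512               # Step 1b: 1460 -> 512
    return mss

mss = 9180                           # ATM default MTU payload
mss = dgc_adjust_mss(mss, "congestion_avoidance")   # -> 1460
mss = dgc_adjust_mss(mss, "congestion_avoidance")   # -> 512
print(mss)
```

Sending smaller segments under congestion means each lost cell destroys less TCP data and each retransmission is cheaper, which is the intuition behind the throughput gains in the results slides.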

30 Implementation of DGC algorithm  DGC is implemented in Linux kernel 2.4.0-test10 with the ATM-0.78 distribution.  Sender-side implementation only.

31 Results and Discussions [Plot: effective throughput (Mbps) vs. switch buffer size (Kbytes); CBR stream = 100 Mbps, window size = 64 Kbytes. Curves: DGC TCP with MTU = 9180 bytes; Linux TCP with MTU = 9180 and 512 bytes; available bandwidth.]

32 Results and Discussions [Plot: throughput (Mbps) vs. CBR pressure (Mbps); switch buffer size = 53 Kbytes, window size = 64 Kbytes. Curves: DGC TCP with MTU = 9180 bytes; Linux TCP with MTU = 9180 and 512 bytes; available bandwidth.]

33 Segments Acked [Plot: number of bytes acknowledged (Kbytes) vs. time (sec); switch buffer size = 53 Kbytes, MTU = 9180 bytes. DGC TCP effective throughput = 41.71 Mbps vs. Linux TCP effective throughput = 7.06 Mbps.]

34 TWO UBR STREAMS without any External CBR Pressure [Plot: effective throughput (Mbps) vs. switch buffer size (Kbytes). Curves: UBR stream 1 (CBR = 100 Mbps); UBR stream 2 (CBR = 100 Mbps); a single UBR stream.]

35 Multiple Streams of Linux TCP under CBR pressure [Plot: effective throughput (Mbps) vs. switch buffer size (Kbytes). Curves: maximum available bandwidth; Linux TCP UBR1 (CBR = 100 Mbps); Linux TCP UBR2 (CBR = 100 Mbps).]

36 Linux TCP and DGC TCP Streams [Plot: effective throughput (Mbps) vs. switch buffer size (Kbytes). Curves: DGC TCP UBR flow 1 (CBR = 100 Mbps); Linux TCP UBR flow 2 (CBR = 100 Mbps); maximum available bandwidth.]

37 DGC and Linux TCP under CBR Pressure [Plot: throughput (Mbps) vs. CBR pressure (Mbps). Curves: DGC TCP UBR flow 1; Linux TCP UBR flow 2; total additive throughput.]

38 CONCLUSIONS  The proposed TCP DGC algorithm uses more than 98% of the available bandwidth.  No retransmission timeouts occur, hence the synchronization effect is minimized.  Fairness is considerably better than with the other available flavors of TCP.

39 Final Concluding Remarks  Analysis of TCP Reno 1. The Slow-Start algorithm pumps too much data into the network. 2. If the MTU size is large, throughput will be better. 3. TCP throughput is less than 2% of the available bandwidth during heavy congestion in the network. 4. Retransmission timeouts occur too frequently, producing TCP throughput deadlock. 5. The Fast Retransmit and Fast Recovery algorithms are unable to recover multiple segment losses.

40 Final Concluding Remarks  Analysis of Linux TCP 1. Throughput of Linux TCP is improved compared to TCP Reno but is still less than 20% of the available bandwidth. 2. If the MTU size is large, the network reaches a congested state sooner. 3. More than 50% of the total time is consumed in recovering a segment loss. 4. Retransmission timeouts still occur, so Linux TCP will perform poorly in connection-sensitive applications.

41 Final Concluding Remarks Analysis of the proposed TCP DGC algorithm 1. Almost all of the available bandwidth is utilized. 2. The idea is equally applicable to other communication protocols facing congestion problems in the network. 3. DGC TCP may not be useful over the Internet in certain cases.

42 Future Directions High-Speed TCP: http://www.icir.org/floyd/hstcp.html Fast TCP: http://netlab.caltech.edu/FAST/ TCP Performance Tuning Page: http://www.psc.edu/networking/projects/

43 Future Directions Performance analysis of multiple TCP connections, fairness, and buffer requirements over gigabit networks. SCTP (Stream Control Transmission Protocol): multi-homing and multi-streaming. Performance analysis of TCP flavours over wireless ad-hoc networks.

44 Thank you very much.

