
1 CS 381 Introduction to computer networks
Chapter 1 - Lecture 2 1/31/2017

2 The Network Edge
End systems (hosts): all Internet applications are implemented at the end systems, e.g., HTTP, FTP, SSH, SCP, DNS, SMTP.
Reasons for this?

3 The Network Core
Mesh of interconnected routers.
The fundamental question: how is data transferred through the network? Compare the telephone network and the Internet. The telephone network employs "circuit switching": the resources necessary to make a call are reserved for the duration of the communication.

4 Packet Switched Networks
Packet switches generally use a technique called store-and-forward: all of the bits of a packet must have arrived at the input link before the switch will put the first bit of the packet onto the output link. In other words, no part of a packet can be on different links at the same time. This introduces store-and-forward delay:
Time needed for the packet to move from the link into the packet switch
Time needed for the switch to process the packet and select the outgoing link
Time needed to push the packet onto the outgoing link

5 Packet Switched Networks
Switches have multiple input links and output links. Input/output queues are associated with each link in the packet switch (contributing to nodal delay).
Output queues hold packets that arrived while the switch was transmitting another packet.
Input queues hold the bits of a packet until all of its bits have arrived.

6 Packet Switched Networks
Queuing delay: the length of time a packet spends in the input/output queues of a router/switch. Time in queue is determined by multiple factors and is highly variable.
Two delays in packet-switched networks introduced so far: store-and-forward delays and queuing delays. More delays to come!
It is now easier to visualize the idea that the Internet is a "best effort" service.

7 Packet Switch Delays and Loss
If a packet switch is receiving more packets than it can handle, its buffers will begin to overflow and packets are dropped. Buffers on routers/switches are just memory, with finite storage capacity.
The application has no knowledge of this and is not informed: it learns of dropped packets only after they fail to arrive, and even then it doesn't know whether a packet was dropped or is merely delayed in the network.
Is this a problem? Of course! Solutions: flow control and congestion control.

8 Packet Switching: Statistical Multiplexing
[Diagram: hosts A-E share a 100 Mb/s Ethernet feeding a 1.5 Mb/s link, with a queue of packets waiting for the output link]
The sequence of A and B packets has no fixed pattern; bandwidth is shared on demand: statistical multiplexing.

9 Packet-switching: store-and-forward
[Diagram: a packet of L bits crosses three links, each of rate R]
Store and forward: the entire packet must arrive at a router before it can be transmitted on the next link.
Let R be the transmission rate in bits per second and L the number of bits in a packet.
Transmission delay = L/R seconds

10 Packet-switching: store-and-forward
The transmission delay of a link is L/R seconds: it takes L/R seconds to transmit (push out) a packet of L bits onto a link at R bps.
Assuming all links have the same transmission rate, and ignoring propagation delay:
total transmission delay = (number of links) * L/R
Example: L = 7.5 Mb, R = 1.5 Mbps. Total transmission delay = 3 * (7.5 / 1.5) = 15 seconds (5 seconds per link, 3 links).

11 Packet-switching: store-and-forward
Another example: L = 8 Mb, R = 2 Mbps, 4 links. Total transmission delay = 4 * (8 / 2) = 16 seconds (4 seconds per link, 4 links).
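To make the arithmetic concrete, here is a minimal Python sketch of the store-and-forward formula from the last two slides. The function name is mine; it ignores propagation, processing, and queuing delay, just as the slides do.

```python
def store_and_forward_delay(L_bits, R_bps, num_links):
    """Total transmission delay for a packet of L bits crossing
    num_links links of rate R bps, store-and-forward at each hop."""
    return num_links * (L_bits / R_bps)

# The two slide examples (sizes in bits, rates in bits per second):
print(store_and_forward_delay(7.5e6, 1.5e6, 3))  # 15.0 seconds
print(store_and_forward_delay(8e6, 2e6, 4))      # 16.0 seconds
```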

12 Compare Packet Switching and Circuit Switching
Problems with circuit switching:
There are likely to be times when the circuit is not being used but is still dedicated to the communication; no one else can use the resource even though it is idle.
It is more expensive and more difficult to implement than packet switching. Think about how dedicated-bandwidth services would be implemented: each communication stream would require link setup.

13 Packet Switching (can be) More Efficient
Assume 10 circuits on a one megabit/second TDM link: 10 slots per second (0.1 seconds per slot). Each circuit can transmit at 1 megabit/second for 1/10th of a second, for an effective transmission rate of 100 kbps.
One user generates 1000 packets of 1000 bits = 1 megabit. How long does it take to transmit the data?

14 Packet Switching (can be) More Efficient
How long to transmit data? 10 seconds: transmit 100 kbps during its time slot, then waits for next opportunity to send …. Takes 10 seconds even if no other communication is currently active
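A quick sketch of the TDM arithmetic above; the variable names are mine, and the point is that the effective per-circuit rate, not the raw link rate, governs the transfer time.

```python
link_rate_bps = 1_000_000      # 1 Mb/s TDM link
num_slots = 10                 # 10 circuits -> 10 slots per second
effective_rate_bps = link_rate_bps / num_slots  # 100 kb/s per circuit

data_bits = 1000 * 1000        # 1000 packets of 1000 bits = 1 Mb
transfer_time = data_bits / effective_rate_bps
print(transfer_time)           # 10.0 seconds, even if the link is otherwise idle
```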

15 Packet Switching: can allow more users to use the network
Consider a 1 Mb/s link shared by N users. Assume each user transmits at 100 kb/s when "active" and is active 10% of the time.
Circuit switching: supports 10 users.
Packet switching: with 35 users, the probability that more than 10 are active at the same time is quite low; roughly 99.96% of the time there are 10 or fewer active users (checked in the sketch below). Since this is the case, a larger number of users can connect to the network, increasing efficiency without increasing bandwidth.
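That probability can be verified with a short computation. This sketch assumes each of the 35 users is independently active with probability 0.1 and sums the binomial distribution.

```python
from math import comb

n, p = 35, 0.1   # 35 users, each active 10% of the time

# P(at most 10 of the 35 users are active simultaneously)
p_at_most_10 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(11))
print(f"{p_at_most_10:.4f}")   # ~0.9996: the 1 Mb/s link almost always suffices
```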

16 Packet switching versus circuit switching
Is packet switching a "slam dunk" winner? Packet switching provides resource sharing and is simpler: no call setup. It is great for so-called "bursty" data (i.e., not a continuous stream, but data transmitted from time to time), and it assumes a low probability of users being active at exactly the same time.
However, there are problems with packet-switched networks:
Varying delays
Difficult to provide constant path resources
Congestion: packet delay and loss
Protocols needed for reliable data transfer and congestion control

17 Packet switching versus circuit switching
Q: How can we provide circuit-like behavior? Bandwidth guarantees are needed for audio/video applications; this is still an unsolved problem (Chapter 7).

18 Internet structure: network of networks
Roughly hierarchical. At the center are "tier-1" ISPs (e.g., Verizon, Sprint, AT&T, Cable and Wireless) with national/international coverage; they treat each other as equals. Tier-1 providers interconnect (peer) privately.
[Diagram: three tier-1 ISPs peering with one another]

19 Internet structure: network of networks
Tier 1 ISP Network that can reach every other network on the Internet without purchasing IP transit Tier 2 ISP Network that peers with one or more tier 1 ISPs, possibly other tier 2 ISPs Regional ISP Tier 3 ISP A network that solely purchases transit from other networks to participate in the Internet Local ISP Introduction

20 Internet structure: network of networks
A packet passes through many networks!
[Diagram: packets travel from a local ISP up through tier-3 and tier-2 ISPs, across the tier-1 ISPs, and back down to the destination's local ISP]

21 Chapter 1: roadmap
1.1 What is the Internet?
1.2 Network edge: end systems, access networks, links
1.3 Network core: circuit switching, packet switching, network structure
1.4 Delay, loss and throughput in packet-switched networks
1.5 Protocol layers, service models
1.6 Networks under attack: security
1.7 History

22 How do loss and delay occur?
Packets queue in router buffers: when the packet arrival rate to a link exceeds the output link capacity, packets queue and wait their turn.
[Diagram: hosts A and B send into a router; one packet is being transmitted while others queue (delay); arriving packets are dropped (loss) if there are no free buffers]

23 Four sources of packet delay
1. nodal processing: check bit errors determine output link 2. queueing time waiting to start transmission depends on congestion level of router A B propagation transmission nodal processing queueing Introduction

24 Delay in packet-switched networks
3. Transmission delay: R = link bandwidth (bps), L = packet length (bits); time to send the bits into the link = L/R
4. Propagation delay: d = length of the physical link, s = propagation speed in the medium (~2x10^8 m/sec); propagation delay = d/s

25 Nodal delay
Delays experienced at each network switch/router along a path: d_nodal = d_proc + d_queue + d_trans + d_prop
d_proc = processing delay: packet header examination time; typically a few microseconds or less
d_queue = queuing delay: time waiting in the input queue before processing and in the output queue before transmission onto the link; depends on congestion and the node's CPU
d_trans = transmission delay: time needed to "push" the packet onto the link = L/R; significant for low-speed links
d_prop = propagation delay: time needed to traverse a link (wired or wireless); a few microseconds to hundreds of milliseconds
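A tiny sketch summing the four components. The function and the example numbers are mine, chosen only to illustrate the relative magnitudes.

```python
def nodal_delay(d_proc, d_queue, L_bits, R_bps, dist_m, speed_mps=2e8):
    """d_nodal = d_proc + d_queue + d_trans + d_prop."""
    d_trans = L_bits / R_bps     # transmission delay: push the bits onto the link
    d_prop = dist_m / speed_mps  # propagation delay: traverse the link
    return d_proc + d_queue + d_trans + d_prop

# Hypothetical hop: 1500-byte packet, 10 Mb/s link, 1000 km of fiber
print(nodal_delay(d_proc=2e-6, d_queue=1e-3,
                  L_bits=1500 * 8, R_bps=10e6, dist_m=1_000_000))  # ~0.0072 s
```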

26 Queuing delay (revisited)
Queuing delay is most highly variable aspect of delay Important to get a feel for how queuing delay happens R=link bandwidth (bps) L=packet length (bits) a=average packet arrival rate (number of packets per second) Assume all packets are of length L and that packets arrive at a steady rate. Define: Traffic Intensity (I) = La/R La  (mean) number of bits per second arriving at the router R is the link transmission rate What does it mean when I > 1 Introduction

27 Queuing delay (revisited)
What does it mean when Traffic Intensity > 1? Router is receiving bits at a faster rate than it can process them Queuing delay increases as router cache begins to store packets for processing. Can eventually lead to packet loss. Queue full, nowhere for incoming packets, dropped Introduction 1-27 27

28 Queuing delay (revisited)
What does it mean when Traffic Intensity is close to 0? The number of bits arriving at the router is much lower than the rate at which they can be transmitted on the link. Would not expect much queuing at all Router can handle all of the incoming traffic Introduction 1-28 28

29 Queuing delay (revisited)
What does it mean when Traffic Intensity is close to 1? The number of bits arriving at the router is equal to the number of bits that the router can handle. If traffic is at all bursty (multiple packets arriving at same time) then there are times when router is overwhelmed. Queuing may develop and increase over time. Introduction 1-29 29

30 Queueing delay (revisited)
R = link bandwidth (bps), L = packet length (bits), a = average packet arrival rate; traffic intensity = La/R
La/R ~ 0: average queuing delay small
La/R -> 1: delays become large
La/R > 1: more "work" arriving than can be serviced; average delay infinite!
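A sketch classifying the three regimes above. The 0.8 cutoff for "close to 1" is my own illustrative choice, not from the slides.

```python
def traffic_intensity(L_bits, a_pkts_per_sec, R_bps):
    """Traffic intensity I = L*a/R."""
    return L_bits * a_pkts_per_sec / R_bps

I = traffic_intensity(L_bits=1000, a_pkts_per_sec=1200, R_bps=1_000_000)
if I > 1:
    print(f"I = {I:.2f}: more work arriving than can be serviced; queue grows")
elif I > 0.8:                  # "close to 1" threshold chosen for illustration
    print(f"I = {I:.2f}: bursts can overwhelm the router; delays become large")
else:
    print(f"I = {I:.2f}: little queuing expected")
```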

31 "Real" Internet delays and routes
What do "real" Internet delay and loss look like? The traceroute program:
Sends a UDP packet toward each router on the path from source to destination.
Each router on the path sends back a "special" message to the source.
The source tracks the time between sending the UDP packet and receiving the message from the router.
It actually sends three UDP packets per router, to provide three separate timings.

32 How Traceroute Works
Recall that TCP and UDP are the two transport protocols of the Internet. When a packet is put onto the network by either protocol, it is carried in a datagram whose header provides information such as the packet's source/destination addresses.

33 How Traceroute Works
One field of the header is Time To Live (TTL), which specifies the maximum number of routers the packet can traverse. TTL is decremented by each router the packet goes through. When TTL reaches 0, that router sends an Internet Control Message Protocol (ICMP) message back to the source informing it that the TTL has expired. Traceroute exploits this by sending probes with TTL = 1, 2, 3, ..., so each successive router on the path reveals itself; a minimal sketch follows.
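The logic fits in a few lines of Python. This is a sketch, not the real traceroute: it sends one untimed probe per hop instead of three timed ones, and the raw ICMP socket requires root privileges.

```python
import socket

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    """Send UDP probes with increasing TTL; read the ICMP replies."""
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                             socket.getprotobyname("udp"))
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)  # expire after ttl hops
        recv.settimeout(timeout)
        send.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv.recvfrom(512)  # router where TTL hit 0, or destination
            print(ttl, addr[0])
            if addr[0] == dest_addr:      # destination reached: stop probing
                break
        except socket.timeout:
            print(ttl, "*")               # no reply within the timeout
        finally:
            send.close()
            recv.close()

# traceroute("example.com")  # requires root (raw ICMP socket)
```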

34 Packet loss
The queue (aka buffer) preceding a link has finite capacity. A packet arriving to a full queue is dropped (aka lost). A lost packet may be retransmitted by the previous node, by the source end system, or not at all.
[Diagram: hosts A and B feed a router's buffer (waiting area); one packet is being transmitted, and a packet arriving to the full buffer is lost]

35 Throughput
Throughput: the rate (bits/time unit) at which bits are transferred between sender and receiver.
Instantaneous: rate at a given point in time
Average: rate over a longer period of time
Consider transferring a file from host A to host B. If the file is F bits and the transfer takes T seconds, what is the average throughput? F/T.

36 Throughput
A server with a file of F bits sends to a client over a link of capacity Rs bits/sec followed by a link of capacity Rc bits/sec; think of pipes that can carry fluid at rates Rs and Rc, with the server pouring bits (fluid) into the pipe.
What is the average throughput? The minimum of Rs and Rc.

37 Throughput (more)
Rs < Rc: what is the average end-to-end throughput? Rs bits/sec.
Rs > Rc: what is the average end-to-end throughput? Rc bits/sec.
Bottleneck link: the link on the end-to-end path that constrains end-to-end throughput.

38 Throughput: Internet scenario
10 connections (fairly) share a backbone bottleneck link of R bits/sec; each connection also crosses a server-side access link Rs and a client-side access link Rc.
Per-connection end-to-end throughput: min(Rc, Rs, R/10). In practice, Rc or Rs is often the bottleneck; a sketch follows.
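The bottleneck idea reduces to a one-liner. The rates below are made up for illustration.

```python
def end_to_end_throughput(Rs, Rc, R, num_connections=10):
    """Per-connection throughput when num_connections fairly share backbone R."""
    return min(Rs, Rc, R / num_connections)

# Illustrative rates in bits/sec: 100 Mb/s server, 25 Mb/s client, 1 Gb/s core
print(end_to_end_throughput(Rs=100e6, Rc=25e6, R=1e9))  # 25 Mb/s: Rc is the bottleneck
```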

