
1 SONET/SDH
Yaakov (J) Stein, Chief Scientist, RAD Data Communications

2 Course Outline
Background (analog telephony, TDM, PDH)
SONET/SDH history and motivation
Architecture (path, line, section)
Rates and frame structure
Payloads and mappings
Protection and rings
VCAT and LCAS
Handling packet data

3 Background

4 The PSTN circa 1900
pair of copper wires ("local loop"), manual routing at the local exchange office (CO)
Analog voltage travels over copper wire end-to-end
Voice signal arrives at the destination severely attenuated and distorted
Routing performed manually at exchange office(s)
Routing is an expensive and lengthy operation
Route is maintained for the duration of the call

5 Telephony Multiplexing
1900: 25% of telephony revenues went to copper mines
standard was 18 gauge, long distance even heavier
two wires per loop to combat cross-talk
needed a method to place multiple conversations on a single trunk
1918: "carrier system" (FDM), 5 conversations on a single trunk
later extended to 12 (group), still later supergroups (60), mastergroups (600), ...
[figure: FDM channels stacked along the frequency axis at 4 kHz spacing: 4, 8, 12, 16, 20 kHz]

6 The Digitalization of the PSTN
Shannon (Bell Labs) proved that digital communications is always better than analog communications, and the PSTN became digital
Better means:
more efficient use of resources (e.g. more channels on trunks)
higher voice quality (less noise, less distortion)
added features
After the invention of the transistor, in 1963: T-carrier system (TDM)
1 byte per sample, 8000 samples per second
T1 = 24 conversations per trunk, 2 groups per cable!
[figure: digital timeslots interleaved along the time axis]
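A minimal arithmetic sketch (Python) of how these sampling parameters become the familiar line rates; the 193-bit T1 frame (24 timeslots + 1 framing bit) and the 32-timeslot E1 frame are standard structures, not stated on this slide:

SAMPLES_PER_SEC = 8000                       # one 8-bit sample per channel every 125 us
t1_rate = (24 * 8 + 1) * SAMPLES_PER_SEC     # 24 timeslots + 1 framing bit per frame
e1_rate = 32 * 8 * SAMPLES_PER_SEC           # 32 timeslots (30 voice + framing + signaling)
print(t1_rate, e1_rate)                      # 1544000, 2048000 -> 1.544 and 2.048 Mbps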

7 and switching became easier too
Analog crossbar switch vs. digital cross-connect (DXC)
[figure: crossbar matrix connecting analog lines vs. a DXC that reorders timeslots under processor control]
Complexity of the analog crossbar increases rapidly with size

8 Optimized Telephony Routing
Circuit switching (route is maintained for the duration of the call)
Route "set-up" is an expensive operation, just as it was for manual switching
Today, complex least-cost routing algorithms are used
A call consists of set-up, voice and tear-down phases

9 The PSTN circa 1960
[figure: subscriber line / local loop connected through trunks and circuits; automatic routing through the universal telephone network]
Analog voltages used throughout, but extensive Frequency Division Multiplexing
Voice signal arrives at the destination after amplification and filtering to 4 kHz
Automatic routing, universal dial-tone
Voltage and tone signaling
Circuit switching (route is maintained for the duration of the call)

10 The Present PSTN
[figure: subscriber line and last mile, class 5 switches and a tandem switch inside the PSTN network]
Analog voltages and copper wire used only in the "last mile", but the core is designed to mimic the original situation
Voice signal filtered to 4 kHz at the input to the digital network
Time Division Multiplexing of digital signals in the network
Extensive use of fiber optic and wireless physical links
T1/E1, PDH and SONET/SDH "synchronous" protocols
Signaling can be channel/trunk associated or via a separate network (SS7)
Automatic routing
Circuit switching (route is maintained for the duration of the call)
Complex routing optimization algorithms (LP, Karmarkar, etc.)

11 TDM timing
Time Division Multiplexing relies on all channels (timeslots) having precisely the same timing (frequency and phase)
In order to enforce this, the TDM device itself frequently performs the digitization
[figure: analog signals digitized by the TDM device itself]

12 if the inputs are already digital
If the TDM switch does not digitize the analog signals itself, then there can be a problem:
the clocks used to digitize do not have identical frequencies
we get byte slips! (well, actually, we can get bit slips first ...)
[figure: exaggerated pictorial example of component signals slipping against each other at the TDM mux]
Numerical example: clock derived from an 8000 Hz quartz crystal
typical crystal accuracy = ±50 ppm, so 2 crystals can differ by 100 ppm
i.e. 0.8 samples / second, so the difference is 1 sample after 1 1/4 seconds
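The same slip-rate arithmetic as a one-line sketch (Python, using the slide's example numbers):

sample_rate_hz = 8000
offset_ppm = 100                                   # two +/-50 ppm crystals at opposite extremes
slip_per_sec = sample_rate_hz * offset_ppm * 1e-6  # samples gained/lost per second
print(slip_per_sec, 1 / slip_per_sec)              # 0.8, 1.25 s until a full one-sample slip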

13 The fix
We must ensure that all the clocks have the same frequency
Every telephony network has an accurate clock called a "stratum 1" or "Primary Reference Clock"
All other clocks are directly or indirectly locked to it (master - slave)
A TDM receiving device can lock onto the source clock based on the incoming data (FLL, PLL)
For this to work, we must ensure that the data has enough transitions (special line coding, scrambling bits, etc.)
[figure: a signal segment with transitions vs. a flat segment with no transitions]

14 Comparing clocks
A clock is said to be isochronous (isos=equal, chronos=time) if its ticks are equally spaced in time
2 clocks are said to be synchronous (syn=same, chronos=time) if they tick in time, i.e. have precisely the same frequency
2 clocks are said to be plesiochronous (plesio=near, chronos=time) if they are nominally of the same frequency but are not locked

15 PDH principle
If we want yet higher rates, we can mux together TDM signals (tributaries)
We could demux the TDM timeslots and directly remux them, but that is too complex
The TDM inputs are already digital, so we must either insist that the mux provide the clock to all tributaries (not always possible, they may already be locked to a network)
OR somehow transport each tributary with its own clock across a higher speed network with a different clock (without spoiling remote clock recovery)

16 PDH hierarchies
level   CEPT               N.A.               Japan
0       64 kbps            64 kbps            64 kbps
        * 30               * 24               * 24
1       E1  2.048 Mbps     T1  1.544 Mbps     J1  1.544 Mbps
        * 4                * 4                * 4
2       E2  8.448 Mbps     T2  6.312 Mbps     J2  6.312 Mbps
        * 4                * 7                * 5
3       E3  34.368 Mbps    T3  44.736 Mbps    J3  32.064 Mbps
        * 4                * 6                * 3
4       E4  139.264 Mbps   T4  274.176 Mbps   J4  97.728 Mbps

17 Framing and overhead
In addition to locking on to the bit-rate, we need to recognize the frame structure
We identify frames by adding a Frame Alignment Signal (FAS)
The FAS is part of the frame overhead (which also includes "C-bits", OAM, etc.)
Each layer in the PDH hierarchy adds its own overhead
For example:
E1 - 2 overhead bytes per 32 bytes - overhead 6.25 %
E2 - 4 E1s = 8.192 Mbps out of 8.448 Mbps, so there is an additional 0.256 Mbps = 3 %
altogether 4*30*64 kbps = 7.680 Mbps of voice out of 8.448 Mbps, or 9.09% overhead
What happens next?
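A quick sketch of the same overhead arithmetic (Python):

e1_payload = 30 * 64_000             # 30 voice timeslots
e1_rate    = 2_048_000               # 32 timeslots
e2_rate    = 8_448_000
print(1 - e1_payload / e1_rate)      # 0.0625  -> 6.25 % overhead within E1
print(1 - 4 * e1_rate / e2_rate)     # ~0.0303 -> ~3 % additional overhead added by E2
print(1 - 4 * e1_payload / e2_rate)  # ~0.0909 -> 9.09 % total overhead at the E2 level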

18 PDH overhead
digital signal   rate (Mbps)   voice channels   overhead percentage
T1               1.544         24               0.52 %
T2               6.312         96               2.66 %
T3               44.736        672              3.86 %
T4               274.176       4032             5.88 %
E1               2.048         30               6.25 %
E2               8.448         120              9.09 %
E3               34.368        480              10.61 %
E4               139.264       1920             11.76 %
Overhead always increases with data rate!

19 OAM
Analog channels and 64 kbps digital channels do not have mechanisms to check signal validity and quality, thus:
major faults could go undetected for long periods of time
it is hard to characterize and localize faults when reported
minor defects might go unnoticed indefinitely
The solution is to add mechanisms based on overhead
As PDH networks evolved, more and more overhead was dedicated to Operations, Administration and Maintenance (OAM) functions, including:
monitoring for valid signal
defect reporting
alarm indication/inhibition (AIS)

20 PDH Justification
In addition to the FAS, PDH overhead includes justification control (C-bits) and justification opportunity "stuffing" (R-bits)
Assume the tributary bitrate is B ± T
Positive justification (see the sketch below)
payload is expected at the highest bitrate B+T
if the tributary rate is actually at the maximum bitrate then all payload and R bits are filled
if the tributary rate is lower than the maximum then sometimes there are not enough incoming bits, so the R-bits are not filled and the C-bits indicate this
Negative justification
payload is expected at the lowest bitrate B-T
if the tributary rate is actually the minimum bitrate then the payload space suffices
if the tributary rate is higher than the minimum then sometimes there are not enough positions to accommodate it, so R-bits in the overhead are used and the C-bits indicate this
Positive/Negative justification
payload is expected at the nominal bitrate B
positive or negative justification is applied as required
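A hypothetical sketch (Python) of the positive-justification decision above; it is not the exact bit layout of any PDH level, just the logic of "fill the R-bit only when the tributary FIFO is running fast enough", with the C-bit telling the demux what was done (it assumes the FIFO always holds at least payload_positions - 1 bits, i.e. the tributary stays above B-T):

def build_frame(fifo, payload_positions):
    # fifo: pending tributary bits; the last payload position is the R (opportunity) bit
    frame = [fifo.pop(0) for _ in range(payload_positions - 1)]
    if fifo:                      # tributary fast enough: the R-bit carries real data
        frame.append(fifo.pop(0))
        c_bit = 0
    else:                         # tributary slow: stuff the R-bit
        frame.append(0)
        c_bit = 1                 # tells the demux to discard the R-bit
    return frame, c_bit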

21 SONET/SDH motivation and history

22 First step
With the divestiture of the US Bell system a new need arose
MCI and NYNEX couldn't directly interconnect optical trunks
The Interexchange Carrier Compatibility Forum requested committee T1 to solve the problem
Needed a multivendor / multioperator fiber-optic communications standard
Three main tasks:
Optical interfaces (wavelengths, power levels, etc.)
proposal submitted to T1X1 (Aug 1984)
T1.106 standard on single mode optical interfaces (1988)
Operations (OAM) system
proposal submitted to T1M1
T1.119 standard
Rates, formats, definition of network elements
Bellcore (Yau-Chau Ching and Rodney Boehm) proposal (Feb 1985) proposed to T1X1
term SONET was coined
T1.105 standard (1988)

23 PDH limitations
Rate limitations
Copper interfaces defined
Need to mux/demux the hierarchy of levels (hard to pull out a single timeslot)
Overhead percentage increases with rate
At least three different systems (Europe, NA, Japan)
E: 2.048, 8.448, 34.368, 139.264
T: 1.544, 3.152, 6.312, 44.736, 274.176
J: 1.544, 3.152, 6.312, 32.064, 97.728, 397.2
So a completely new mechanism was needed

24 Idea behind SONET
Synchronous Optical NETwork
Designed for optical transport (high bitrate)
Direct mapping of lower levels into higher ones
Carry all PDH types in one universal hierarchy
ITU version = Synchronous Digital Hierarchy: different terminology but interoperable
Overhead doesn't increase with rate
OAM designed-in from the beginning

25 Standardization !
The original Bellcore proposal:
hierarchy of signals, all multiples of a basic rate (50.688)
basic rate about 50 Mbps to carry a DS3 payload
bit-oriented mux mechanisms to carry DS1, DS2, DS3
Many other proposals were merged into the 1987 draft document (rate ...)
In the summer of 1986 CCITT expressed interest in cooperation
needed a rate of about 150 Mbps to carry E4
wanted a byte oriented mux
Initial compromise attempt: byte mux
US wanted 13 rows * 180 columns
CEPT wanted 9 rows * 270 columns
Compromise!
US would use a basic rate of 51.84 Mbps, 9 rows * 90 columns
CEPT would use three times that rate, 155.52 Mbps, 9 rows * 270 columns

26 SONET/SDH architecture

27 Layers
SONET was designed with definite layering concepts
Physical layer - optical fiber (linear or ring)
when the fiber reach is exceeded - regenerators
regenerators are not mere amplifiers; regenerators use their own overhead
fiber between regenerators is called a section (regenerator section)
Line layer - link between SONET muxes (Add/Drop Multiplexers)
input and output at this level are Virtual Tributaries / Virtual Containers (VCs)
actually 2 layers: lower order VC (for low bitrate payloads), higher order VC (for high bitrate payloads)
Path layer - end-to-end path of client data (tributaries)
client data (payload) may be PDH, ATM, packet data

28 SONET architecture
[figure: path terminations, ADMs and regenerators; the path spans end-to-end, lines run between ADMs, sections run between adjacent elements]
SONET (SDH) has 3 layers:
path - end-to-end data connection, muxes tributary signals (SDH: path); there are STS paths + Virtual Tributary (VT) paths
line - protected multiplexed SONET payload (SDH: multiplex section)
section - physical link between adjacent elements (SDH: regenerator section)
Each layer has its own overhead to support the needed functionality

29 STS, OC, etc.
A SONET signal is called a Synchronous Transport Signal
The basic STS is STS-1; all others are multiples of it - STS-N
The (optical) physical layer signal corresponding to an STS-N is an OC-N
SONET     Optical   rate
STS-1     OC-1      51.84M
STS-3     OC-3      155.52M    (* 3)
STS-12    OC-12     622.08M    (* 4)
STS-48    OC-48     2488.32M   (* 4)
STS-192   OC-192    9953.28M   (* 4)
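A small sketch (Python) showing that every STS-N rate is an exact multiple of the 51.84 Mbps STS-1 rate derived from the frame structure presented later:

STS1_MBPS = 9 * 90 * 8 * 8000 / 1e6       # 810 bytes/frame * 8 bits * 8000 frames/s = 51.84
for n in (1, 3, 12, 48, 192):
    print(f"STS-{n} / OC-{n}: {n * STS1_MBPS:.2f} Mbps")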

30 rates and frame structure

31 SONET / SDH frames
Synchronous Transport Signals are bit-signals (OCs are optical)
Like all TDM signals, there are framing bits at the beginning of the frame
However, it is convenient to draw SONET/SDH signals as rectangles
[figure: the serial bit stream with its framing, redrawn as a rectangle of rows and columns]

32 SONET STS-1 frame
[figure: 9 rows * 90 columns, framing at the top left]
Each STS-1 frame is 90 columns * 9 rows = 810 bytes
There are 8000 STS-1 frames per second, so each byte represents 64 kbps (each column is 576 kbps)
Thus the basic STS-1 rate is 51.84 Mbps

33 SDH STM-1 frame
[figure: 9 rows * 270 columns]
Synchronous Transport Modules are the bit-signals for SDH
Each STM-1 frame is 270 columns * 9 rows = 2430 bytes
There are 8000 STM-1 frames per second
Thus the basic STM-1 rate is 155.52 Mbps, 3 times the STS-1 rate!

34 SONET/SDH rates
SONET     SDH       columns   rate
STS-1     -         90        51.84M
STS-3     STM-1     270       155.52M
STS-12    STM-4     1080      622.08M
STS-48    STM-16    4320      2488.32M
STS-192   STM-64    17280     9953.28M
STS-N has 90N columns; STM-M corresponds to STS-N with N = 3M
SDH rates increase by factors of 4 each time
STS/STM signals can carry PDH tributaries, for example:
STS-1 can carry 1 T3 or 28 T1s or 1 E3 or 21 E1s
STM-1 can carry 3 E3s or 63 E1s or 3 T3s or 84 T1s

35 SONET/SDH tributaries
[table: how many T1, T3, E1, E3 and E4 tributaries are carried by STS-1, STS-3/STM-1, STS-12/STM-4, STS-48/STM-16 and STS-192/STM-64]
E3 and T3 are carried as Higher Order Paths (HOPs)
E1 and T1 are carried as Lower Order Paths (LOPs)
(the numbers are for direct mapping)

36 STS-1 frame structure
[figure: 9 rows * 90 columns; the first 3 columns are the Transport Overhead (TOH): 3 rows of section overhead above 6 rows of line overhead; the remaining 87 columns are the Synchronous Payload Envelope (SPE)]
Section overhead is 3 rows * 3 columns = 9 bytes = 576 kbps
framing, performance monitoring, management
Line overhead is 6 rows * 3 columns = 18 bytes = 1152 kbps
protection switching, line maintenance, mux/concat, SPE pointer
SPE is 9 rows * 87 columns = 783 bytes = 50.112 Mbps
Similarly, STM-1 has 9 (different) columns of section+line overhead!

37 STM-1 frame structure
[figure: 9 rows * 270 columns; the first 9 columns are the Section Overhead (SOH): RSOH, AU pointers, MSOH]
STM-1 has 9 (different) columns of transport overhead!
RS overhead is 3 rows * 9 columns
Pointer overhead is 1 row * 9 columns
MS overhead is 5 rows * 9 columns
SPE is 9 rows * 261 columns

38 Even higher rates
[figure: 9 rows, 9*N columns of overhead, 270*N columns in total]
3 STS-1s can form an STS-3
4 STM-1s (STS-3s) can form an STM-4 (STS-12)
4 STM-4s (STS-12s) can form an STM-16 (STS-48)
etc. for STM-N (STS-3N)
The procedure is byte-interleaving

39 Byte-interleaving
[figure: bytes of the component signals taken in round-robin order to build the higher-rate signal]
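A hypothetical sketch (Python) of byte interleaving, assuming equal-length component byte streams; the helper name is illustrative only:

def byte_interleave(streams):
    # streams: list of equal-length byte sequences (e.g. one per STS-1)
    out = bytearray()
    for column in zip(*streams):      # take one byte from each stream per step
        out.extend(column)
    return bytes(out)

# byte_interleave([b'AAA', b'BBB', b'CCC']) -> b'ABCABCABC'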

40 Scrambling
SONET/SDH receivers recover clock based on the incoming signal
An insufficient number of 0-1 transitions causes degradation of clock performance
In order to guarantee sufficient transitions, SONET/SDH employ a scrambler
All data except the first row of section overhead is scrambled
The scrambler is a 7-bit frame-synchronous scrambler, x^7 + x^6 + 1, initialized to all ones
A short scrambler is sufficient for voice but NOT for data, which may contain long stretches of zeros
When sending data an additional payload scrambler is used
modern standards use the 43-bit self-synchronous scrambler x^43 + 1, i.e. y_n = x_n + y_(n-43) (mod 2)
run continuously on ATM payload bytes (suspended for the 5 bytes of cell tax)
run continuously on HDLC payloads
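A minimal sketch (Python, bit lists of 0/1) of the two scramblers named above; frame alignment, bit ordering and reset points are simplified assumptions, not taken from the slide:

def section_scrambler(bits):
    # additive (frame-synchronous) x^7 + x^6 + 1, register reset to all ones at frame start
    state = [1] * 7
    out = []
    for b in bits:
        out.append(b ^ state[6])          # XOR data with the generator output bit
        fb = state[6] ^ state[5]          # feedback taps x^7 and x^6
        state = [fb] + state[:-1]
    return out

def payload_scrambler_x43(bits, state=None):
    # self-synchronous x^43 + 1: y[n] = x[n] XOR y[n-43]
    state = state or [0] * 43
    out = []
    for b in bits:
        y = b ^ state[-1]
        out.append(y)
        state = [y] + state[:-1]
    return out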

41 STS-1 Overhead
[figure: the 3-column transport overhead byte layout: A1 A2 J0 / B1 E1 F1 / D1 D2 D3 / H1 H2 H3 / B2 K1 K2 / D4 D5 D6 / D7 D8 D9 / D10 D11 D12 / S1 M0 E2]
The STS-1 overhead consists of:
3 rows of section overhead
frame sync (A1, A2)
section trace (J0)
error control (B1)
section orderwire (E1)
Embedded Operations Channel (Di)
6 rows of line overhead
pointer and pointer action (Hi)
error control (B2)
Automatic Protection Switching signaling (Ki)
Data Channel (Di)
Synchronization Status Message (S1)
Far End Block Error (M0)
line orderwire (E2)

42 STM-1 Overhead
[figure: the 9-column STM-1 SOH byte layout: RSOH rows A1 A2 J0 / B1 m E1 F1 / D1 D2 D3, then the AU pointer row, then MSOH rows B2 K1 K2 / D4 D5 D6 / D7 D8 D9 / D10 D11 D12 / S1 M1 E2]
res - reserved for national use
m - media dependent (defined for SONET radio)

43 A1, A2, J0 (section overhead)
A1, A2 - framing bytes: A1 = 11110110, A2 = 00101000
SONET/SDH framing always uses equal numbers of A1 and A2 bytes
J0 - regenerator section trace (in early SONET - a counter called C1)
enables the receiver to be sure that the section connection is still OK
enables identifying individual STS/STMs after muxing
J0 goes through a 16 byte sequence
the MSBs are J0 framing (1000...00): the first byte carries a 1 and the CRC-7 (C7..C1) of the previous frame, the other 15 bytes carry a 0 followed by a 7-bit character (S)
the S characters form the 15-character section access point identifier

44 B1, E1, F1, D1-3 (section overhead)
B1 - Byte Interleaved Parity-8 byte: even parity over the bits of the bytes of the previous frame, computed after scrambling
only 1 BIP-8 for a multiplexed STS/STM
E1 - section orderwire: 64 kbps voice link for technicians, from regenerator to regenerator
F1 - 64 kbps link for user purposes
D1 + D2 + D3 - 192 kbps messaging channel used by the section termination as the Embedded Operations Channel (SONET) or Data Communications Channel (SDH)
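A short sketch (Python) of a BIP-8 computation: bit i of the parity byte gives even parity over bit i of every byte in the covered block, which is exactly a running XOR:

def bip8(data: bytes) -> int:
    parity = 0
    for b in data:
        parity ^= b           # XOR accumulates even parity per bit position
    return parity

# The receiver recomputes bip8() over the same block and compares it with B1;
# each mismatched bit counts as one violation.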

45 Pointers (line overhead)
In SONET, pointers are considered part of the line overhead
For STS-1, H1+H2 is the pointer, H3 is the pointer action byte
H1+H2 indicates the offset (in bytes) from H3 to the SPE (i.e. if 0 then the J1 POH byte is immediately after H3 in the row)
4 MSBs are the New Data Flag, 10 LSBs are the actual offset value (0 - 782)
When offset = 522 the STS-1 SPE is contained in a single STS-1 frame
In all other cases the SPE straddles two frames
When the offset is a multiple of 87, the SPE is rectangular
To compensate for clock differences we have pointer justification
with negative justification, H3 carries the extra data byte
with positive justification, the byte after H3 is a stuffing byte
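A hypothetical sketch (Python; the helper name and 0-based indexing are mine) of turning a pointer offset into the row and column of J1 inside the 9 x 90 frame, counting 87 payload bytes per row starting from the byte after H3:

def j1_position(offset):          # offset in 0..782
    row = 3 + offset // 87        # H3 sits in row index 3; 87 payload bytes per row
    col = 3 + offset % 87         # the first 3 columns are transport overhead
    return row % 9, col           # rows past 8 wrap into the next frame

# j1_position(0)   -> (3, 3): J1 immediately after H3
# j1_position(522) -> (0, 3): the SPE exactly fills the next frame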

46 SONET Justification
If the tributary rate is above nominal, negative justification is needed
when there are fewer than 8 extra bits in the buffer: NDF stays 0110, offset unchanged
when 8 extra bits accumulate: NDF is set to 1001, the extra byte is placed into H3, and the offset is decremented by 1 (byte)
If the tributary rate is below nominal, positive justification is needed
when there are fewer than 8 missing bits in the buffer: nothing changes
when 8 bits are missing: the byte after H3 is a stuffing byte and the offset is incremented by 1 (byte)
[figure: H1 H2 with the extra byte carried in H3 (negative justification) vs. the stuff byte after H3 (positive justification)]

47 B2, K1, K2, D4-D12 (line overhead)
B2 - BIP-8 of the line overhead + previous envelope (w/o scrambling)
N B2s for a muxed STM-N
K1 and K2 are used for Automatic Protection Switching (see later)
D4 - D12 are a 576 kbps Data Communications Channel between multiplexers
usually manufacturer specific OAM functions

48 S1, M0, E2 (line overhead)
S1 - Synchronization Status Message
indicates the stratum level (unknown, stratum 1, ..., do not use)
M0 - Far End Block Error
indicates the number of BIP violations detected
E2 - line orderwire
64 kbps voice link for technicians, from line mux to line mux

49 Payloads and Mappings

50 STS-1 HOP SPE structure
We saw that the pointer in the line overhead points to the STS path overhead (POH)
(after re-arranging) the POH is one column of 9 rows (9 bytes = 576 kbps)

51 STS-1 HOP
[figure: the 87-column SPE with the POH column and fixed stuff columns 30 and 59 marked]
1 column of the SPE is POH
2 more ("fixed stuffing") columns are reserved
We are left with 84 columns = 756 bytes = 48.384 Mbps for payload
This is enough for an E3 (34.368M) or a T3 (44.736M)

52 STS-1 Path overhead
[figure: POH column bytes J1 B3 C2 G1 F2 H4 F3 K3 N1]
1 column of overhead for the path (576 kbps)
The POH is responsible for:
path type identification
path performance monitoring
status (including of mapped payloads)
virtual concatenation
path protection
trace

53 J1, B3, C2 (path overhead)
J1 - path trace: enables the receiver to be sure that the path connection is still OK
B3 - BIP-8: even bit parity over the bytes (without scrambling) of the previous payload
C2 - path signal label: identifies the payload type (examples in table)
C2 (hex)   Payload type
00         unequipped
01         nonspecific
02         LOP (TUG)
04         E3/T3
12         E4
13         ATM
16         PoS - RFC 1662
18         LAPS X.85
1A         10G Ethernet
1B         GFP
CF         PoS - RFC 1619

54 G1, F2, H4, F3, K3, N1 (path overhead)
G1 - path status: conveys status and performance back to the originator
4 MSBs are path FEBE, 1 bit RDI, 3 unused
F2 and F3 - user specific communications
H4 - used for LOP multiframe sync and VCAT (see later)
K3 (4 MSBs) - path APS
N1 - Tandem Connection Monitoring: messaging channel for tandem connections

55 LOP
[figure: the 87-column SPE (fixed stuff at columns 30 and 59) divided into 7 interleaved VT groups]
To carry lower rate payloads, divide the 84 available columns into 7 * 12 interleaved columns, i.e. 7 Virtual Tributary (VT) Groups
A VT group is 12 columns of 9 rows, i.e. 108 bytes or 6.912 Mbps
A VT group is composed of VT(s)
there are different types of VT in order to carry different types of payload
all VTs in a VT group must be of the same type (no mixing)
but different VT groups in the same SPE can have different VT types
A VT can have 3, 4, 6 or 12 columns

56 SONET/SDH : VT/VC types
VT/STS   VC      columns   rate (Mbps)   payload
VT 1.5   VC-11   3         1.728         DS1 (1.544)    4 per group   LOP
VT 2     VC-12   4         2.304         E1 (2.048)     3 per group   LOP
VT 3     -       6         3.456         DS1C (3.152)   2 per group   LOP
VT 6     VC-2    12        6.912         DS2 (6.312)    1 per group   LOP
STS-1    VC-3              48.384        E3 (34.368), DS3 (44.736)    HOP
STS-3c   VC-4              149.760       E4 (139.264)                 HOP
standard PDH rates map efficiently into SONET/SDH!

57 LO Path overhead
LOP OH is responsible for timing, PM, REI, ...
LO Path APS signaling is the 4 MSBs of byte K4
[figure: the 500 us LOP multiframe is 4 frames of 125 us, identified by the 2 LSBs of H4 (XXXXXX00 .. XXXXXX11); each frame carries one pointer byte (V1, V2, V3, V4) and one overhead byte (V5, J2, N2, K4); per frame a VC-11 carries 25 payload bytes of 27, a VC-12 carries 34 of 36]

58 Payload capacity
VT1.5/VC-11 has 3 columns = 27 bytes = 1.728 Mbps
but 2 bytes are used for overhead (V1/V2/V3/V4 and V5/J2/N2/K4)
so actually only 25 bytes = 1.6 Mbps are available
Similarly VT2/VC-12 has 4 columns = 36 bytes = 2.304 Mbps
but 2 bytes are used for overhead
so actually only 34 bytes = 2.176 Mbps are available
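The same capacity arithmetic as a sketch (Python), converting bytes per 125 us frame to Mbps:

def mbps(bytes_per_frame):
    return bytes_per_frame * 8 * 8000 / 1e6

print(mbps(27), mbps(25))    # VT1.5/VC-11: 1.728 gross, 1.6 net
print(mbps(36), mbps(34))    # VT2/VC-12:   2.304 gross, 2.176 net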

59 LOP overhead
V5 consists of:
BIP (2b)
REI (1b)
RFI (1b)
Signal label (3b) (uneq, async, bit-sync, byte-sync, test, AIS)
RDI (1b)
J2 is the path trace
N2 is the network operator byte
may be used for LOP tandem connection monitoring (LO-TCM)
K4 is for LO VCAT and LO APS

60 SDH Containers
Tributary payloads are not placed directly into SDH
Payloads are placed (adapted) into containers
The containers are made into virtual containers (by adding POH)
Next, the pointer is used - the pointer + VC is a TU or AU
a Tributary Unit adapts a lower order VC to a higher order VC
an Administrative Unit adapts a higher order VC to SDH
TUs and AUs are grouped together until they are big enough
We finally get an Administrative Unit Group
To the AUG we add SOH to make the STM frame

61 Formally ...
C-n: n = 11, 12, 2, 3, 4
VC-n = POH + C-n
TU-n = pointer + VC-n (n = 11, 12, 2, 3)
AU-n = pointer + VC-n (n = 3, 4)
TUG = N * TU-n
AUG = N * AU-n
STM-N = SOH + AUG
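A hypothetical sketch (Python) of the same nesting as data structures; the class and field names are illustrative, not standard terminology:

from dataclasses import dataclass, field
from typing import List

@dataclass
class VC:                  # virtual container = POH + container payload (C-n)
    n: int                 # 11, 12, 2, 3 or 4
    poh: bytes
    payload: bytes

@dataclass
class TU:                  # tributary unit: pointer + lower order VC (n = 11, 12, 2, 3)
    pointer: int
    vc: VC

@dataclass
class AU:                  # administrative unit: pointer + higher order VC (n = 3, 4)
    pointer: int
    vc: VC

@dataclass
class STM:                 # STM-N = SOH + AUG (a group of AUs)
    soh: bytes
    aug: List[AU] = field(default_factory=list)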

62 Multiplexing
An AUG may contain a VC-4 with an E4
or it may contain 3 AU-3s, each with a VC-3 carrying an E3
In the latter case, the AU pointer points to the AUG and inside the AUG are 3 pointers to the AU-3s
[figure: H1 H2 H3 pointers and the POH bytes (J1 B3 C2 G1 F2 H4 F3 K3 N1) of the contained VCs]

63 More multiplexing
Similarly, we can hierarchically build complex structures
Lower rate STMs can be combined into higher rate STMs
AUGs can be combined into STMs
AUs can be combined into AUGs
TUGs can be combined into high order VCs
Lower rate TUs can be combined into TUGs
etc.
But only certain combinations are allowed by the standards

64 All SDH mappings ...
[figure: the SDH multiplexing tree, summarized below]
STM-N  <-  *N AUG
AUG    <-  AU-4 (VC-4 <- C-4: E4, ATM)   or   *3 AU-3 (VC-3 <- C-3: E3/T3, ATM)
VC-4   <-  *3 TUG-3
TUG-3  <-  TU-3 (VC-3)   or   *7 TUG-2
VC-3   <-  *7 TUG-2
TUG-2  <-  TU-2 (VC-2 <- C-2: T2, ATM)   or   *3 TU-12 (VC-12 <- C-12: E1, ATM)   or   *4 TU-11 (VC-11 <- C-11: T1, ATM)

65 All SONET mappings
[figure: the SONET multiplexing tree, summarized below]
STS-N        <-  *N STS-1 (or STS-3c, with pointer processing)
STS-3c SPE   <-  E4, ATM
STS-1 SPE    <-  E3/T3, ATM   or   *7 VTG
VTG          <-  VT6 SPE (T2, ATM)   or   *3 VT2 SPE (E1, ATM)   or   *4 VT1.5 SPE (T1, ATM)

66 Tributary mapping types
When mapping tributaries into VCs, PDH-like bit-stuffing is used
For E1 and T1 there are several options:
asynchronous mapping (framing-agnostic)
bit synchronous mapping
byte synchronous mapping (time-slot aligned)
E4 into VC-4 and E3/T3 into VC-3 are always asynchronous
T1 into VC-11 may be any of the 3 (in byte synchronous mapping the framing bit is placed in the VC overhead)
E1 into VC-12 may be asynchronous or byte synchronous

67 WAN-PHY (10 GbE in STM-64) 10GBASE-W 802.3-2005 Clause 50
There is a special case where the bit-rates work out relatively well
10GbE 10GBASE-R (64B/66B coding) can be directly mapped into an STM-64 (with contiguous concatenation - see later) without need for GFP
The MAC creates a "stretched InterPacket Gap" to compensate for the rate being < 10G
This is the fastest connection commonly used for Internet traffic
Complication: SDH clock accuracy is 4.6 ppm, GbE accuracy is 20 ppm
[figure: the 64*(270-9) = 16704 payload columns, with J1 and 63 columns of fixed stuff]

68 Protection and Rings

69 What is protection ?
SONET/SDH needs to be highly reliable (five nines)
Down-time should be minimal (less than 50 msec)
So systems must repair themselves (no time for manual intervention)
Upon detection of a failure (dLOS, dLOF, high BER) the network must reroute traffic (protection switching) from the working channel to the protection channel
The Network Element that detects the failure (tail-end NE) initiates the protection switching
The head-end NE must change forwarding or send duplicate traffic
Protection switching is unidirectional
Protection switching may be revertive (automatically revert to the working channel)
[figure: head-end NE and tail-end NE connected by a working channel and a protection channel]

70 How does it work?
Head-end and tail-end NEs have bridges (muxes)
Head-end and tail-end NEs maintain a bidirectional signaling channel
Signaling is contained in the K1 and K2 bytes of the protection channel
K1 - tail-end status and requests
K2 - head-end status
[figure: head-end bridge and tail-end bridge connected by the working channel, the protection channel and the signaling channel]

71 Linear 1+1 protection
Simplest form of protection
Can be at the OC-n level (different physical fibers), at the STM/VC level (called SubNetwork Connection Protection), or end-to-end path (called trail protection)
Head-end bridge always sends data on both channels
Tail-end chooses which channel to use based on BER, dLOS, etc.
No need for signaling
If non-revertive, there is no distinction between working and protection channels
BW utilization is 50%

72 Linear 1:1 protection
Head-end bridge usually sends data on the working channel
When the tail-end detects a failure it signals (using K1) to the head-end
The head-end then starts sending data over the protection channel
When not in use, the protection channel can be used for (discounted) extra traffic (pre-emptible unprotected traffic)
May be at any layer (only the OC-n level protects against fiber cuts)

73 Linear 1:N protection
In order to save BW, we allocate 1 protection channel for every N working channels
N is limited to 14
4 bits in the K1 byte from tail-end to head-end identify the channel: 0 - protection (null) channel, 1-14 - working channels, 15 - extra traffic channel
[figure: N working channels sharing one protection channel]

74 Two-fiber vs. four-fiber rings
Ring based protection is popular in North America (100K+ rings)
Full protection against physical fiber cuts
Simpler and less expensive than mesh topologies
Protection at the line (multiplex section) or path layer
Four-fiber rings
fully redundant at the OC level
can support bidirectional routing at the line layer
Two-fiber rings
support unidirectional routing at the line layer
2 fibers in opposite directions

75 Unidirectional vs. bidirectional
Unidirectional routing
working channel B-A travels in the same direction (e.g. clockwise) as A-B
management simplicity: A-B and B-A can occupy the same timeslots
inefficient: waste of ring BW and excessive delay in one direction
Bidirectional routing
A-B and B-A are opposite in direction, both using the shortest route
spatial reuse: timeslots can be reused in other sections
[figure: ring examples of unidirectional A-B / B-A routing vs. bidirectional routing with B-C / C-B reusing timeslots]

76 UPSR vs. BLSR (MS-SPRing)
[figure: the combination space - unidirectional/bidirectional, path/line switching, two-fiber/four-fiber]
Of all the possible combinations, only a few are in use
Unidirectional Path Switched Rings (UPSR)
protects tributaries
extension of 1+1 to the ring topology
Bidirectional Line Switched Rings (BLSR, two-fiber and four-fiber versions)
called Multiplex Section Shared Protection Ring (MS-SPRing) in SDH
simultaneously protects all tributaries in the STM
extension of 1:1 to the ring topology

77 UPSR
Working channel is in one direction, protection channel in the opposite direction
All traffic is added in both directions
decision as to which to use is made at the drop point (no signaling)
Normally non-revertive, so effectively two diverse paths
Good match for access networks
one resilient access ring is less expensive than a fiber pair per customer
Inefficient for core networks
no spatial reuse - every signal is on every span in both directions
node needs to continuously monitor every tributary to be dropped

78 BLSR
Switching is at the line level - less monitoring
When a failure is detected, the tail-end NE signals the head-end NE
Works for unidirectional/bidirectional fiber cuts, and NE failures
Two-fiber version
half of the OC-N capacity is devoted to protection
only half the capacity is available for traffic
Four-fiber version
a fully redundant OC-N is devoted to protection
twice as many NEs as compared to two-fiber
[figure: example recovery from a unidirectional fiber cut]

79 VCAT and LCAS

80 Concatenation
Payloads that don't fit into standard VT/VC sizes can be accommodated by concatenating several VTs / VCs
For example, 10 Mbps doesn't fit into any VT or VC
so w/o concatenation we need to put it into an STS-1 (48.384 Mbps payload) and the remaining 38.384 Mbps can not be used
We would like to be able to divide the 10 Mbps among 7 VT1.5/VC-11s = 7 * 1.600 = 11.200 Mbps
or 5 VT2/VC-12s = 5 * 2.176 = 10.880 Mbps

81 Concatenation (cont.)
There are 2 ways to concatenate X VTs or VCs:
Contiguous Concatenation (G.707)
HOP - STS-Nc (SONET) or VC-4-Nc (SDH), or LOP - 1-7 VC-2-Nc into a VC-3
since it has to fit into the SONET/SDH payload, only STS-Nc with N = 3*4^n or VC-4-Nc with N = 4^n
components are transported together and in-phase
requires support at intermediate network elements
Virtual Concatenation (VCAT, G.707)
HOP - STS-1-Xv or STS-Nc-Xv (SONET) or VC-3/4-Xv (SDH), or LOP - VT-1.5/2/3/6-Xv (SONET) or VC-11/12/2-Xv (SDH)
HOP: X ≤ 256, LOP: X ≤ 64 (limitation due to bits in the header)
payload is split over multiple STSs / STMs
fragments may follow different routes
requires support only at path terminations
requires buffering and differential delay alignment

82 Contiguous Concatenation: STS-3c
STS-3: 270 columns * 9 rows
9 columns of section and line overhead, 3 columns of path overhead
258 columns of SPE payload: 258 columns * 576 kbps = 148.608 Mbps
STS-3c: 270 columns * 9 rows
9 columns of section and line overhead, 1 column of path overhead
260 columns of SPE payload: 260 columns * 576 kbps = 149.760 Mbps

83 STS-N vs. STS-Nc
Although both have raw rates of 155.520 Mbps, STS-3c has 2 more columns (1.152 Mbps) available
More generally, an STS-Nc gains (N-1) columns, e.g.
STS-12c gains 11 columns = 6.336 Mbps vis a vis STS-12
STS-48c gains 47 columns = 27.072 Mbps
STS-192c gains 191 columns = 110.016 Mbps!
However, an STS-Nc signal is not as easily separable when we want to add/drop component signals
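The gain arithmetic as a sketch (Python, 576 kbps per column as on the STS-1 frame slide):

COLUMN_KBPS = 576
for n in (3, 12, 48, 192):
    gain = (n - 1) * COLUMN_KBPS / 1000    # Mbps recovered from the saved POH columns
    print(f"STS-{n}c gains {n - 1} columns = {gain:.3f} Mbps")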

84 Virtual Concatenation
VCAT is an inverse multiplexing mechanism (round-robin)
VCAT members may travel along different routes in the SONET/SDH network
Intermediate network elements don't need to know about VCAT (unlike contiguous concatenation, which is handled by all intermediate nodes)
[figure: a payload dealt round-robin into members, each carrying H4 VCAT overhead]
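A hypothetical sketch (Python) of the round-robin inverse multiplexing idea; real VCAT works on frames with sequence numbers, so this only illustrates the split/reassemble symmetry:

def split_round_robin(payload: bytes, members: int):
    return [payload[i::members] for i in range(members)]

def reassemble(parts):
    out = bytearray()
    for column in zip(*parts):
        out.extend(column)
    return bytes(out)

# reassemble(split_round_robin(b"ABCDEFGHI", 3)) == b"ABCDEFGHI"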

85 SDH virtually concatenated VCs
            Capacity (Mbps)             all members in one VC-3    all members in one VC-4
VC-11-Xv    1.600, 3.200, ... 1.600X    X ≤ 28, C ≤ 44.8           X ≤ 64, C ≤ 102.4
VC-12-Xv    2.176, 4.352, ... 2.176X    X ≤ 21, C ≤ 45.696         X ≤ 63, C ≤ 137.088
VC-2-Xv     6.784, 13.568, ... 6.784X   X ≤ 7, C ≤ 47.488          X ≤ 21, C ≤ 142.464
So we have many permissible rates: 1.600, 2.176, 3.200, 4.352, 4.800, 6.400, 6.528, 6.784, 8.000, ...

86 SONET virtually concatenated VTs
            Capacity (Mbps)             all members in one STS-1   all members in one STS-3c
VT1.5-Xv    1.600, 3.200, ... 1.600X    X ≤ 28, C ≤ 44.8           X ≤ 64, C ≤ 102.4
VT2-Xv      2.176, 4.352, ... 2.176X    X ≤ 21, C ≤ 45.696         X ≤ 63, C ≤ 137.088
VT3-Xv      3.328, 6.656, ... 3.328X    X ≤ 14, C ≤ 46.592         X ≤ 42, C ≤ 139.776
VT6-Xv      6.784, 13.568, ... 6.784X   X ≤ 7, C ≤ 47.488          X ≤ 21, C ≤ 142.464
So we have many permissible rates: 1.600, 2.176, 3.200, 3.328, 4.352, 4.800, 6.400, 6.528, 6.656, 6.784, ...

87 Efficiency comparison
rate (Mbps)   w/o VCAT             efficiency   with VCAT             efficiency
10            STS-1                21%          VT2-5v / VC-12-5v     92%
100           STS-3c / VC-4        67%          STS-1-2v / VC-3-2v    100%
1000          STS-48c / VC-4-16c   42%          STS-3c-7v / VC-4-7v   95%
Using VCAT increases efficiency to close to 100%!

88 PDH VCAT
[figure: TS0 and the VCAT overhead octet in the 1st frame of each of 4 bonded E1s]
Recently ITU-T G.7043 expanded VCAT to E1, T1, E3, T3
Enables bonding of up to 16 PDH signals to support higher rates
Only bonding of like PDH signals is allowed (e.g. can't mix E1s and T1s)
The multiframe is always per G.704/G.832 (e.g. T1 - ESF 24 frames, E1 - 16 frames)
1 byte per multiframe is VCAT overhead (SQ, MFI, MST, CRC)
Supports LCAS (to be discussed next)

89 PDH VCAT overhead octet
[figure: TS0 frames of an E1 with the VCAT overhead octet]
There is one VCAT overhead octet per multiframe, so the net rate is
T1: (24*24-1 =) 575 data bytes per 3 ms multiframe = 191.67 kB/s
E1: (16*30-1 =) 495 data bytes per 2 ms multiframe = 247.5 kB/s
T3 and E3 can also be used
We will show the overhead octet format later (when using LCAS, the overhead octet is called VLI)
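The same net-rate arithmetic as a sketch (Python):

t1_net = (24 * 24 - 1) / 3e-3 / 1000        # data bytes per 3 ms multiframe -> kB/s
e1_net = (16 * 30 - 1) / 2e-3 / 1000        # data bytes per 2 ms multiframe -> kB/s
print(round(t1_net, 2), round(e1_net, 1))   # 191.67, 247.5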

90 Delay compensation
802.3ad Ethernet link aggregation cheats
each identifiable flow is restricted to one link
doesn't work for a single high-BW flow
VCAT is completely general - it works even with a single flow
VCG members may travel over completely separate paths, so the VCAT mechanism must compensate for differential delay
Requirement for over 1/2 second of compensation
Must compensate to the bit level
but since frames have a Frame Alignment Signal, the VCAT mechanism only needs to identify individual frames

91 VCAT buffering
Since VCAT components may take different paths, at egress the members are no longer in the proper temporal relationship
The VCAT path termination function buffers members and outputs them in the proper order (relying on POH sequencing)
(up to 512 ms of differential delay can be tolerated)
VCAT defines a multiframe to enable delay compensation
the length of the multiframe determines the delay that can be accommodated
The H4 byte in a member's POH contains:
sequence indicator (identifies the component) (the number of bits limits X)
MFI multiframe indicator (multiframe sequencing to find the differential delay)

92 Multiframes and superframes
Here is how we compensate for 512 ms of differential delay
512 ms corresponds to a superframe of 4096 TDM frames (4096 * 0.125 ms = 512 ms)
For HOP SDH VCAT and PDH VCAT (H4 byte or PDH VCAT overhead)
the basic multiframe is 16 frames
so we need 256 multiframes in a superframe (256*16 = 4096)
The MultiFrame Indicator is divided into two parts:
MFI1 (4 bits) appears once per frame and counts from 0 to 15 to sequence the multiframe
MFI2 (8 bits) appears once per multiframe and counts from 0 to 255
For LOP SDH (bit 2 of the K4 byte) a 32 bit frame is built and a 5-bit MFI is dedicated
32 multiframes of 16 ms give the needed 512 ms
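A quick sketch (Python) of how the two-part MFI count covers the 512 ms superframe:

FRAME_US = 125
frames_per_multiframe = 16           # sequenced by MFI1 (4 bits, 0..15)
multiframes_per_superframe = 256     # sequenced by MFI2 (8 bits, 0..255)
superframe_ms = frames_per_multiframe * multiframes_per_superframe * FRAME_US / 1000
print(superframe_ms)                 # 512.0 ms of measurable differential delay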

93 Link Capacity Adjustment Scheme
LCAS is defined in G.7042 (also numbered Y.1305)
LCAS extends VCAT by allowing dynamic BW changes
LCAS is a protocol for dynamic adding/removing of VCAT members
hitless BW modification
similar to the Link Aggregation Control Protocol for Ethernet links
LCAS is not a "control plane" or "management" protocol
it doesn't allocate the members
we still need control protocols to perform the actual allocation
LCAS is a "handshake" protocol
it enables the path ends to negotiate the addition / deletion
it guarantees that there will be no loss of data during the change
it can determine that a proposed member is ill suited
it allows automatic removal of a faulty member

94 LCAS - how does it work?
LCAS is unidirectional (for symmetric BW we need to perform it twice)
LCAS functions can be initiated by the source or the sink
LCAS assumes that all VCG members are error-free; LCAS messages are CRC protected
LCAS messages are sent in advance
the sink processes messages after differential compensation
a message describes the link state at the time of the next message
the receiver can switch to the new configuration in time
LCAS messages are in:
the upper nibble of the H4 byte for higher order SONET/SDH
the K4 byte for lower order SONET/SDH
the VCAT overhead octet for PDH - "VCAT and LCAS Information" (VLI)
LCAS messages employ redundancy
messages from source to sink are member specific
messages from sink to source are replicated

95 LCAS control messages
LCAS adds fields to the basic VCAT ones
Fields in messages from source to sink:
MFI MultiFrame Indicator
SQ SeQuence indicator (member ID inside the VCAT group)
CTRL ConTRoL (IDLE, being ADDed, NORMal, End of Sequence, Do Not Use)
GID Group IDentification (identifies the VCAT group)
Fields in messages from sink to source (identical in all members):
MST Member STatus (1 bit for each VCG member)
RS-Ack ReSequence Acknowledgement
Fields in both directions:
CRC Cyclic Redundancy Code
The precise format depends on the VCAT type (H4, K4, PDH)
Note: for the H4 format SQ is 8 bits, so up to 256 VCG members
for PDH SQ is only 4 bits, so up to 16 VCG members

96 H4 format
[figure: the 16-frame H4 multiframe; one nibble of H4 carries MFI1 (the frame count 0-15), while the other nibble in successive frames carries MFI2 bits 1-4, MFI2 bits 5-8, CTRL, GID, reserved fields, CRC-8 bits 1-4, CRC-8 bits 5-8, MST bits, more MST bits, RS-ACK, SQ bits 1-4 and SQ bits 5-8]

97 H4 format - some comments
CRC-8 (when using K4 it is CRC-3)
covers the previous 14 frames (not sync'ed on the multiframe)
polynomial x^8 + x^2 + x + 1
MST
each VCG member carries the status of all members, so we need 256 bits of member status
this is done by muxing MST bits
there are 8 MST bits per multiframe and 32 multiframes in an MST multiframe
no special sequencing, just the MFI2 multiframe number mod 32
GID
single bit identifier
all members of a VCG share the same bit
cycles through an LFSR sequence
different VCGs use different phase offsets of the sequence
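A minimal sketch (Python) of a bitwise CRC-8 for the polynomial x^8 + x^2 + x + 1 (0x07) named above; the initial value and bit ordering used by G.7042 are simplifying assumptions here:

def crc8(bits, poly=0x07):
    reg = 0
    for b in bits:
        fb = ((reg >> 7) & 1) ^ (b & 1)   # feedback = register MSB XOR next message bit
        reg = (reg << 1) & 0xFF
        if fb:
            reg ^= poly
    return reg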

98 LCAS - adding a member (1)
When more/less BW is needed, we need to add/remove VCAT members
Adding/removing VCAT members first requires provisioning (management)
LCAS handles member sequence number assignment
LCAS ensures service is not disrupted
Example: adding a 4th member to group "g"
Initial state: GID=g SQ=1 CTRL=NORM, GID=g SQ=2 CTRL=NORM, GID=g SQ=3 CTRL=EOS
Step 1: NMS provisions the new member
source sends CTRL=IDLE for the new member
sink sends MST=FAIL for the new member
State: GID=g SQ=1 CTRL=NORM, GID=g SQ=2 CTRL=NORM, GID=g SQ=3 CTRL=EOS, GID=g SQ=FF CTRL=IDLE

99 LCAS - adding a member (2)
Step 2: source sends CTRL=ADD and the new SQ
sink sends MST=OK for the new member if it has been provisioned, if it is receiving the new member OK, and if it is able to compensate for the delay
otherwise it sends MST=FAIL and the source reports this to the NMS
State: GID=g SQ=1 CTRL=NORM, GID=g SQ=2 CTRL=NORM, GID=g SQ=3 CTRL=EOS, GID=g SQ=4 CTRL=ADD
Step 3: source sends CTRL=EOS for the new member
the new member starts to carry traffic
sink sends RS-Ack
State: GID=g SQ=1 CTRL=NORM, GID=g SQ=2 CTRL=NORM, GID=g SQ=3 CTRL=NORM, GID=g SQ=4 CTRL=EOS
Note 1: several new members may be added at once
Note 2: removing a member is similar
the source puts CTRL=IDLE for the member to be removed and stops using it
all member sequence numbers must be adjusted

100 LCAS - service preservation
To preserve service integrity, if the sink detects a failure of a VCAT member LCAS can temporarily remove the member (if the service can tolerate the BW reduction)
Example:
Initial state: GID=g SQ=1 CTRL=NORM, GID=g SQ=2 CTRL=NORM, GID=g SQ=3 CTRL=NORM, GID=g SQ=4 CTRL=EOS
Step 1: sink sends MST=FAIL for member 2
source sends CTRL=DNU (special treatment if EOS) and ceases to use member 2
State: GID=g SQ=1 CTRL=NORM, GID=g SQ=2 CTRL=DNU, GID=g SQ=3 CTRL=NORM, GID=g SQ=4 CTRL=EOS
Note: if the EOS member fails, renumber to ensure EOS stays active
Step 2: sink sends MST=OK indicating the defect is cleared
source returns CTRL to NORM and starts using the member again
Note: if the NMS decides to permanently remove the member, proceed as in the previous slide

101 Handling Packet Data

102 Packet over SONET
Currently defined in RFC 2615 (PPP over SONET), which obsoletes RFC 1619
SONET/SDH can provide a point-to-point byte-oriented full-duplex synchronous link
PPP is ideal for data transport over such a link
PoS uses PPP in HDLC framing to provide a byte-oriented interface to the SONET/SDH infrastructure
The POH signal label (C2) indicates PoS as C2=16 (C2=CF if no scrambler)

103 PoS architecture
[figure: protocol stack - IP over PPP over HDLC over SONET/SDH]
PoS is based on PPP in HDLC framing
Since SONET/SDH is byte oriented, byte stuffing is employed
A special scrambler is used to protect SONET/SDH timing
PoS operates on IP packets
if IP is delivered over Ethernet, the Ethernet is terminated (frame removed)
Ethernet must be reconstituted at the far end
requires routers at the edges of the SONET/SDH network

104 PoS Details
The IP packet is encapsulated in PPP; the default MTU is 1500 bytes
up to 64,000 bytes allowed if negotiated by PPP
FCS is generated and appended
PPP in HDLC framing with byte stuffing
the 43 bit scrambler is run over the SPE
the byte stream is placed octet-aligned in the SPE (e.g. 149.760 Mbps of an STM-1)
HDLC frames may cross SPE boundaries
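A minimal sketch (Python) of RFC 1662-style HDLC octet stuffing as used by PoS; ACCM handling and the FCS are omitted:

FLAG, ESC = 0x7E, 0x7D

def hdlc_stuff(payload: bytes) -> bytes:
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])   # escape the byte by XORing with 0x20
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)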

105 PoS problems
PoS is BW efficient, but PoS has its disadvantages:
BW must be predetermined
HDLC BW expansion and nondeterminacy
BW allocation is tightly constrained by SONET/SDH capacities, e.g. GbE requires a full OC-48 pipe
PoS requires removing the Ethernet headers, so we lose RPR, VLAN, 802.1p, multicasting, etc.
PoS requires IP routers

106 LAPS
In 2001 ITU-T introduced protocols for transporting packets over SDH
X.85 IP over SDH using LAPS
X.86 Ethernet over LAPS
Built on the series of ITU "LAPx" HDLC-based protocols
Use the ISO HDLC format
Implement connectionless byte-oriented protocols over SDH
X.85 is very close to (but not quite) IETF PoS

107 GFP architecture
A new approach, not based on HDLC
Defined in ITU-T G.7041 (also numbered Y.1303), originally developed in T1X1 to fix ATM limitations
(like ATM) uses HEC protected frames instead of HDLC
Client may be PDU-oriented (Ethernet MAC, IP) or block-oriented (GbE, fiber channel)
GFP frames are octet aligned, contain at most 65,535 bytes, and consist of a header + payload area
Any idle time between GFP frames is filled with GFP idle frames
[figure: layered stack - clients (Ethernet, IP, other) over the GFP client specific part, over the GFP common part, over SDH / OTN / HDLC]

108 GFP frame structure
Every GFP frame has a 4-byte core header:
2 byte Payload Length Indicator (PLI); PLI = 0,1,2,3 are for control frames
2 byte core Header Error Control (cHEC), CRC-16 with polynomial x^16 + x^12 + x^5 + 1
the entire core header is XOR'ed with B6AB31E0
Idle GFP frames have PLI=0 and no payload area
Non-idle GFP frames have ≥ 4 bytes in the payload area
the payload has its own header
2 payload modes: GFP-F and GFP-T
the payload is optionally protected with a CRC-32
[figure: frame layout - core header (PLI 2B, cHEC 2B), then payload area (payload header 4-64B, payload, optional payload FCS 4B)]
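A hypothetical sketch (Python) of building the core header as described above; the CRC initial value and bit ordering are simplifying assumptions, not taken from G.7041:

def crc16(data: bytes, poly=0x1021, reg=0x0000) -> int:
    # MSB-first CRC-16 with generator x^16 + x^12 + x^5 + 1
    for byte in data:
        reg ^= byte << 8
        for _ in range(8):
            reg = ((reg << 1) ^ poly) & 0xFFFF if reg & 0x8000 else (reg << 1) & 0xFFFF
    return reg

def gfp_core_header(pli: int) -> bytes:
    raw = pli.to_bytes(2, "big")
    raw += crc16(raw).to_bytes(2, "big")               # cHEC over the PLI
    return (int.from_bytes(raw, "big") ^ 0xB6AB31E0).to_bytes(4, "big")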

109 GFP payload header
The GFP payload header has:
type (2B)
type HEC, tHEC (CRC-16)
extension header (0-60B), either null or linear extension (payload type muxing)
extension HEC, eHEC (CRC-16)
The type field consists of:
Payload Type Identifier (3b): PTI=000 for client data, PTI=100 for client management (OAM dLOS, dLOF)
Payload FCS Indicator (1b): PFI=1 means there is a payload FCS
Extension Header ID, EXI (4b)
User Payload Identifier, UPI (8b): values for Ethernet, IP, PPP, FC, RPR, MPLS, etc.

110 GFP modes
GFP-F - frame mapped GFP
good for PDU-based protocols (Ethernet, IP, MPLS) or HDLC-based ones (PPP)
the client PDU is placed in the GFP payload field
GFP-T - transparent GFP
good for protocols that exploit physical layer capabilities
in particular the 8B/10B line code used in fiber channel, GbE, FICON, ESCON, DVB, etc.
were we to use GFP-F we would lose the control info; GFP-T is transparent to these codes
also, GFP-T needn't wait for the entire PDU to be received (which would add delay)

