1 © 2010 MAINS Consortium. MAINS 2nd EC Technical Review, Brussels, March 29th 2012. MAINS (Metro Architectures enablINg Subwavelengths). Georgios Zervas.


1 © 2010 MAINS Consortium. MAINS 2nd EC Technical Review, Brussels, March 29th 2012. MAINS (Metro Architectures enablINg Subwavelengths). Georgios Zervas (WPL, UEssex). WP4: Experimental validation of the MAINS concept.

2 Contents • Brief WP4 Y2 summary: objectives, activities and results • Technical insight on Y2 results: TSON Node prototype; SLAE prototype (to be completed in Y3); Virtual PC over OPST field trial on Primetel's network; Year 3 plans

3 Main WP4 Year 2 objectives and plans for Year 3. [Y1] Identify and purchase the main devices required to support TSON. [Y1] Map the TSON architecture (T2.2) to TSON implementation rules. [Y2] Design and implementation of the TSON Node prototype ✓ [Y2] Field trial of Virtual PC services over OPST on Primetel's network ✓ [Y2-Y3] Implementation of the SLAE tool for sub-lambda (time) resource allocation ✓ [Y3] TSON network test-bed: on-going. [Y3] TSON-OPST transport/data plane interworking: on-going. [Y3] TSON-OPST interworking with extended GMPLS control plane: to start M28.

4 Activities breakdown (Y1/Y2, Y2 and Y3).

5 WP4 Year 2 Gantt chart: M23 Implementation of TSON Nodes; M23 Field trial of Virtual PC over OPST; M25 Implementation of sub-lambda assignment element; M28 OPST-TSON interconnection; M33 GMPLS controlled OPST-TSON testbed.

6 Year 2 work summary: objectives, activities and results • Objectives and activities carried out in T4.1 (M12-M25): developing the TSON Metro Node prototype for edge and bypass services (a Layer 2 prototype system to deliver TSON services — Ethernet parsing, allocation, aggregation, etc.; a Layer 1 prototype system to switch TSON data sets; dynamic and programmable Layer 2 TSON reconfiguration); the Sub-Lambda Allocation Engine (SLAE) tool, ending in Year 3 (M25) • Activities carried out in T4.2 (M21-M23): field trial demonstrating the Virtual PC service over an OPST ring on Primetel's network • Results: TSON Node (L2 and L1) prototype (T4.1, D4.1); field trial of Virtual PC over OPST successfully performed (T4.2, D4.2); SLAE prototype tool (T4.1, D4.5).

7 Deliverables:
D4.1: Implementation of OPST Metro Ring - OBST Metro Mesh interconnection Node and Mesh bypass Metro Node (M23, Year 2)
D4.2: Field trial of OPST ring demonstrating PC virtualization services (M23, Year 2)
D4.5: Implementation of sub-lambda assignment element (M25, Year 3)
D4.3: Integration of controlled OBST Metro Mesh interconnected with OPST Metro Ring network (M28, Year 3)
D4.4: Field trial of GMPLS controlled Ring-Mesh interconnected network (M33, Year 3)

8 [D4.1 insight] Time-Shared Optical Network (TSON) Node prototype • Software platform to support: Virtual PC application; sub-wavelength enabled GMPLS stack; sub-lambda PCE for RWTA; node interfaces • Layer 2 FPGA-based platform to support: 1x10GE and 2xTSON transceivers; independent hardware sharing and emulation; flexible time-slice aggregation, scheduling and optical data formatting; Software-Hardware defIned NEtwork (SHINE) hitless reconfiguration from packet-based (e.g. Ethernet) to TSON transport • Layer 1 optical nodes based on: fast switches (PLZTs) only, for TSON support; Architecture on Demand (AoD) for flexible time and frequency allocation.

9 MAINS TSON Node FPGA developments summary
Top Level (≈5K code lines):
- Component instantiation, glue logic: VHDL (≈4200 code lines)
- MDIO Controller (access to the configuration and status registers of the PHY): VHDL (≈300 code lines)
- I2C Controller (programs the I2C-programmable XO/VCXO SI570 and synthesizer SI5368): VHDL (≈200 code lines)
- Signal/pulse synchronization between different clock domains: VHDL (≈300 code lines)
A TSON Metro node with one ingress/egress node totals 30.2K code lines; with two ingress/egress nodes, 48.6K code lines.

10 MAINS TSON Node FPGA developments summary (contd.)
Ingress node (≈11.2K code lines):
- GTH transceivers (IP core): VHDL/ngc (≈3200 code lines)
- 10G Ethernet MAC (IP core): VHDL/ngc (≈1200 code lines)
- Receiver Buffer (receives Ethernet frames, keeps good ones and drops bad ones): VHDL (≈800 code lines)
- Distribution block (demuxes input Ethernet frames into different FIFOs): VHDL (≈1400 code lines)
- Aggregation (aggregates Ethernet frames into TSON bursts): VHDL (≈2800 code lines)
- Transmitter Buffer (sends bursts out based on the time-slice allocation; synchronization from RX clock to TX clock): VHDL (≈1800 code lines)

11 MAINS TSON Node FPGA developments summary (contd.)
Egress node (≈7.2K code lines):
- GTH transceivers (IP core): VHDL/ngc (≈3200 code lines)
- 10G Ethernet MAC (IP core): VHDL/ngc (≈1200 code lines)
- Receiver Buffer (receives TSON data sets): VHDL (≈1500 code lines)
- Segregation (segregates TSON bursts into Ethernet frames): VHDL (≈600 code lines)
- Transmitter Buffer (sends Ethernet frames out when segregation is done; synchronization from RX clock to TX clock): VHDL (≈700 code lines)

12 MAINS TSON Node FPGA developments summary (contd.)
GMPLS communication (≈6.4K code lines):
- GTH transceivers (IP core): VHDL/ngc (≈3200 code lines)
- 10G Ethernet MAC (IP core): VHDL/ngc (≈1200 code lines)
- Receiver Buffer (receives Ethernet frames, keeps good ones and drops bad ones): VHDL (≈800 code lines)
- LUT/LUT_Update (updates the time-slice allocation and PLZT switch look-up table): VHDL (≈500 code lines)
- Transmitter Buffer (sends Ethernet frames to the server as LUT update feedback; synchronization from RX clock to TX clock): VHDL (≈700 code lines)
PLZT Switch Control (≈0.4K code lines): controls the PLZT switch based on the PLZT switch LUT, VHDL (≈400 code lines)

13 [D4.5 insight] Sub-Lambda Allocation Engine tool • Request from the GMPLS CP (the SLAE tool is invoked by GMPLS with): network topology matrix; number of wavelengths per link; number of time slices per wavelength; source node; destination node; bit-rate request; number of paths for KSP • Response back to the GMPLS CP: the assigned path; assignment matrix (wavelength & time-slices).
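The request/response exchange above can be sketched as plain data types. This is an illustrative sketch only — the class and field names are hypothetical, not the actual MAINS SLAE API.

```java
// Hypothetical sketch of the SLAE request/response data described on this
// slide; names are illustrative, not the project's real interface.
public class SlaeSketch {

    // Parameters the GMPLS control plane passes when invoking the SLAE tool.
    static class SlaeRequest {
        int[][] topology;             // network topology matrix
        int wavelengthsPerLink;       // number of wavelengths per link
        int timeSlicesPerWavelength;  // number of time slices per wavelength
        int source;                   // source node
        int destination;              // destination node
        double bitRateGbps;           // requested bit-rate
        int kPaths;                   // number of candidate paths for KSP
    }

    // What the SLAE tool returns to the GMPLS control plane.
    static class SlaeResponse {
        int[] assignedPath;           // node sequence of the assigned path
        boolean[][] assignment;       // [wavelength][time-slice] matrix
    }

    public static void main(String[] args) {
        SlaeRequest req = new SlaeRequest();
        req.topology = new int[][] {{0, 1}, {1, 0}}; // two nodes, one link
        req.wavelengthsPerLink = 4;
        req.timeSlicesPerWavelength = 100;
        req.source = 0;
        req.destination = 1;
        req.bitRateGbps = 1.0;
        req.kPaths = 3;
        System.out.println("request considers " + req.kPaths + " KSP candidate paths");
    }
}
```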

14 MAINS SLAE developments summary
Sub-Lambda Assignment Engine (≈5K code lines, all Java):
- Main function (≈1200 code lines)
- Wavelength and time-slice allocation algorithm (≈600 code lines)
- Network information abstraction (≈1400 code lines)
- Routing data types and algorithms (≈500 code lines)
- Utility functions (≈900 code lines)
- Outputs and statistics (≈400 code lines)

15 TSON Node/Network testbed: servers (2x10GE NICs each) for network control & Virtual PC services; FPGA for L2 operation; PLZT (4x4, 2x2) switches; EDFAs; 2 FPGA L2 platforms; FPGA SFP+ (80 km WDM) TX/RX. Back view: FPGA/(DE)MUX.

16 [D4.2 insight] Field trial of OPST ring demonstrating PC virtualization services • Field trial and setup: planned setup; topology; node specifications; configuration • Deployment and operational aspects: OPST deployment; service integration; network interfaces • Performance evaluation and quality of experience: use case scenarios; scalability tests.

17 Field Trial: Virtual PC over OPST ring on Primetel's network. Planned setup.

18 Field trial configuration specs
- 2 Netgear switches: 48x1 Gbps ports, GE SFP uplink ports.
- 1 Sun Fire X2270 server: 4-core Xeon, 2-port 1 GE NIC (igb), 6 GB RAM, 1 TB RAID1 array.
- 1 Sun Fire X4150: 8-core Xeon, 4-port Gigabit NIC (e1000e), 8 GB RAM, 1 TB RAID5 array.
- 3 Intune Beta NXT nodes: Node 1, 1x10G client port, drop channel 16, MAC address 00:50:C2:80:10:2C; Node 2, 1x10G client port, drop channel 10, MAC address 00:50:C2:80:10:2F; Node 3, 1x10G client port, drop channel 19, MAC address 00:50:C2:80:10:2B. Line physical interface: fast tuning laser, 2xLC/APC. Client physical interface: 10GBase-R (IEEE 802.3ae), 1xXFP.
- 1 Intune Networks Beta Node Photonic Manager.
- 1 Intune Networks Beta Node Connection Manager.

19 Field Trial: Virtual PC over OPST ring. Actual topology setup.  LINK TO VIDEO COULD GO HERE

20 Field Trial: Virtual PC over OPST ring. Intune configuration • Two virtualization servers are connected to Netgear switches, which in turn connect to Intune Beta nodes 1 and 3 in a symmetrical ring with a 5 km span per segment • The ring is formed by three Intune Beta nodes, one of which serves as a pass-through (node 2 in the picture) • The Netgear switches provide the interface between the servers' 1G ports and the 10G client interfaces on the Intune Beta nodes.

21 Field Trial: Virtual PC over OPST ring. OPST deployment at Primetel Nicosia HQ. Figure 1: two OPST nodes at HQ. Figure 2: switches for management over LAN. Figure 3: 10 Gbit XFPs for optical switch to OPST nodes. Figure 4: research servers 2 & 3.

22 Field Trial: Virtual PC over OPST ring. Intune management system • A PC was also connected to the LAN to support configuration and management, allowing for the following: when the three-node ring was powered up, the power levels had to be calibrated optically; a bit error test, where PRBS test data is generated before actual data traffic is allowed on the ring; MAC address configuration; master node selection and service mode initialization; virtual connections setup.

23 Field Trial: Virtual PC over OPST ring. Applications setup • A number of virtual machines were created on a single laptop connected to the prototype network with real user interfaces • With the virtual machine application we were able to create a number of virtual machines, shut them down, modify them and, if necessary, transfer them from one server to another • Figure 1 (left) shows the login at the MAINS Virtual PC homepage • Figure 2 (right) shows the user friendliness of the application, where one is able to indicate on a map the selected destination and hence the server location. Server selection can be achieved through this interface. The orange smiley icon represents the user, the blue server is the selected server, and the red servers are the available servers.

24 Field Trial: Virtual PC over OPST ring. Use case scenarios • Quality of experience in Scenario 1: the user accesses his virtual machine from a local server on the same network and hence experiences optimum performance • Quality of experience in Scenario 2: the user accesses his virtual machine from a remote server located approximately 11 km away. The quality level experienced was not much different from the first scenario, with very minor jitter due to the transport layer protocol • Quality of experience in Scenario 3: we tested the transfer of a virtual machine from a remote to a local server while a mobile user changed its point of access from one location to another. The handover time experienced by the user at the application level was mainly due to the copying time of the virtual machine, caused by the transport layer protocol.

25 Year 3 activities: TSON network testbed • TSON network testbed [ongoing work]: connect multiple TSON edge and bypass nodes together • OPST-TSON interworking [ongoing work]: OPST nodes to be hosted at UEssex, and trial of an end-to-end OPST-TSON network • Vertical integration of OPST and TSON with GMPLS [ongoing work]: software integration among GMPLS-PCE-XML; control-transport plane integration • Workshop/demo at ECOC 2013: control plane workshop organized by Juan Pedro Fernandez-Palacios; remote control plane demos.

26 Thank you

27 Backup slides

28 Simulation scenario • Telefonica's Madrid metro reference network: 15 nodes, 23 bidirectional links, 16 wavelengths of 10 Gb/s. Wavelength conversion, optical buffering and time-slice interchange are not deployed in the network • Traffic characteristics: requests follow a Poisson process with exponential holding times of mean 1/μ = 60 s; service connections of 1 Gbps were employed. Two time-slice allocation assignments with fixed Tx or tunable Tx: First-Fit Contiguous (FFC); Multi-Wavelength First-Fit (MWFF or TETRIS).
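The traffic model on this slide (Poisson arrivals, exponential holding times with mean 1/μ = 60 s) can be generated by inverse-transform sampling. A minimal sketch, not the project's simulator; the arrival rate λ = 0.5 requests/s is an assumed value for illustration.

```java
import java.util.Random;

// Sketch of the slide's traffic model: Poisson request arrivals
// (exponential inter-arrival times) and exponential holding times.
public class TrafficModelSketch {
    // Inverse-transform sample of an exponential distribution: -ln(1-u)/rate.
    static double exponential(double u, double rate) {
        return -Math.log(1.0 - u) / rate;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double lambda = 0.5;      // arrival rate (requests/s); assumed value
        double mu = 1.0 / 60.0;   // holding-time rate, mean 1/mu = 60 s (slide)

        double t = 0.0;
        for (int i = 0; i < 5; i++) {
            t += exponential(rng.nextDouble(), lambda);      // next arrival
            double hold = exponential(rng.nextDouble(), mu); // holding time
            System.out.printf("request %d: arrival=%.2f s, holding=%.2f s%n",
                              i, t, hold);
        }
        // Offered load in Erlangs = arrival rate * mean holding time.
        System.out.printf("offered load = %.1f Erlang%n", lambda / mu);
    }
}
```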

29 Results: connection blocking & utilization • Connection blocking probability as a function of the offered load to the network: FFC delivers similar results with 12 tunable transceivers or 16 fixed transceivers, i.e. a 25% port-count reduction when tunable transceivers are used; MWFF supports increased offered load (e.g. 60% more at 10E-3 blocking) due to its greater time-slice allocation flexibility; MWFF benefits more from a higher number of fixed ports than from transceiver tunability • The add port utilization under maximum offered load reaches: 68% for FFC with fixed and 63% with tunable transceivers; 82% for MWFF with fixed and 72% with tunable transceivers; MWFF shows 32% improved add-port utilization compared to FFC.
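The difference between the two policies compared above can be sketched on a simplified single-link slice grid: FFC needs one contiguous run of free slices on a single wavelength, while MWFF (TETRIS-like) collects free slices wherever they are, fragmenting across wavelengths. This is an illustrative model, not the MAINS implementation.

```java
// Sketch of First-Fit Contiguous (FFC) vs Multi-Wavelength First-Fit (MWFF)
// time-slice assignment on one link; free[w][t] = true means slice t on
// wavelength w is unallocated.
public class AllocationSketch {
    // FFC: find n contiguous free slices on a single wavelength.
    static int[] ffc(boolean[][] free, int n) {
        for (int w = 0; w < free.length; w++) {
            int run = 0;
            for (int t = 0; t < free[w].length; t++) {
                run = free[w][t] ? run + 1 : 0;
                if (run == n) return new int[] {w, t - n + 1}; // {wavelength, start}
            }
        }
        return null; // blocked: no contiguous run found
    }

    // MWFF: take the first n free slices anywhere, allowing fragments.
    static int mwff(boolean[][] free, int n, int[][] out) {
        int got = 0;
        for (int t = 0; t < free[0].length && got < n; t++)
            for (int w = 0; w < free.length && got < n; w++)
                if (free[w][t]) out[got++] = new int[] {w, t};
        return got; // got < n means blocked
    }

    public static void main(String[] args) {
        boolean[][] free = new boolean[2][8];
        for (boolean[] row : free) java.util.Arrays.fill(row, true);
        free[0][2] = false; // fragment wavelength 0

        int[] a = ffc(free, 4);
        System.out.println("FFC -> wavelength " + a[0] + ", start slice " + a[1]);
        int got = mwff(free, 4, new int[4][]);
        System.out.println("MWFF allocated " + got + " slices across wavelengths");
    }
}
```

The same fragmented grid that forces FFC to skip past slice 2 is filled immediately by MWFF, which is the flexibility the slide credits for its higher admissible load.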

30 Sub-wavelength connection blocking vs. network load: with MWFF, 55% network load at 10E-3 CBP and 49% increased network utilization.

31 Results: implementation complexity • For the FFC case, the number of fragments per connection remains one • In contrast, MWFF uses more than 8 fragments at all levels of offered load considered • MWFF uses more wavelengths (up to 6) as the offered load increases, to provide the required number of time-slices.

32 OPST-TSON interworking

33 TSON L1/L2 Metro Node (both bypass and interconnection): optical node with AWGs and TSON ingress/egress blocks (figure).

34 TSON transit node to WSON and FSON (Frequency…): address reviewers' comments and support transit to the core.

35 Current FPGA design and implementation. The current FPGA design realises 2 TSON ingress nodes and 2 TSON egress nodes; the LUT is updated by the server through a 10G Ethernet interface.

36 Addressing inter-burst optical power imbalance and burst-mode reception (Year 1 reviewers' comment) • Burst-mode operation: no data between time-sliced data sets (bursts); possible issues with power imbalance; need for fast-transient-response EDFAs; need for a burst-mode receiver (BMR) • Proposed TSON Ethernet-based transport: keep-alive pattern (K-characters) between time-sliced data sets (bursts), with the K-characters prior to each burst coming from the same clock source; minimizes the power imbalance issue; no need for fast-transient-response EDFAs and no need for a BMR.

37 Current FPGA design. XAUI: 10 Gigabit transceivers. Data width: 64 bits. Frequency: MHz. K-characters: for synchronization and clock recovery. Ethernet data: assume maximum Ethernet frame length. For one time-slice (10 µs): total traffic = T*F = 1562*64 = 99,968 bits; actual traffic = 7*191/1562 = 85.5%.
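The per-slice arithmetic above can be made explicit. The word counts (1562 64-bit words per 10 µs slice, 191 words per maximum-length Ethernet frame, 7 frames per slice) are taken from the slide; note the slide quotes the ratio as 85.5%, which rounds up to 85.6%.

```java
// Worked version of the slide's time-slice utilization arithmetic.
public class SliceUtilizationSketch {
    public static void main(String[] args) {
        int wordBits = 64;        // XAUI data width
        int wordsPerSlice = 1562; // 64-bit words in one 10 us time-slice (slide)
        int wordsPerFrame = 191;  // words per maximum-length Ethernet frame (slide)
        int framesPerSlice = 7;   // full frames fitting in one slice (slide)

        // Total raw capacity of a slice.
        int totalBits = wordsPerSlice * wordBits;
        // Fraction of the slice carrying actual frame data.
        double utilization = 100.0 * framesPerSlice * wordsPerFrame / wordsPerSlice;

        System.out.println("total traffic per slice = " + totalBits + " bits");
        System.out.printf("actual traffic = %.1f%%%n", utilization);
    }
}
```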

38 Year 2 deliverables: results • The practical/theoretical TSON utilization for 1500B packets is 95.38% • The TSON/Ethernet utilization for 1500B packets is 87.96% • Ingress/egress processing latency does not exceed 160 μs for 1500B Ethernet data. G. Zervas, et al., FUNMS 2012.

39 SHINE use case: hitless switch-over Ethernet  TSON. TSON allows flexible data aggregation that introduces a latency overhead, which nevertheless remains very low with the FPGA-based solution: < 150 μs (over 2 switches). Ultra-low-latency Ethernet (< 7 μs) is able to support delay-stringent applications. Y. Qin, et al., ONDM 2012.

40 Layer 2 measured results: overhead. For 64B packets at 7G speed, the overhead has not been measured because of a software bug. The result shown is for 1500B.

41 FPGA measured results: contiguous '0' • Limited capacity of FPGA on-chip RAM • The current TSON FPGA design uses 131K RAM as the rx_fifo buffer and 524K RAM as the aggregation buffer • It is able to hold at most 6.5 time-slices of data • If there is a large number of contiguous unallocated time-slices, the buffer in the FPGA might overflow, causing Ethernet frame loss.
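The overflow condition above reduces to a simple check: if the longest run of contiguous unallocated ('0') slices exceeds the buffer capacity of ~6.5 slices, frames can be lost. A simplified model of this check, not the FPGA logic itself:

```java
// Sketch of the contiguous-'0' overflow condition: the aggregation buffer
// holds at most ~6.5 time-slices, so a longer run of unallocated slices
// risks Ethernet frame loss.
public class BufferOverflowSketch {
    // Longest run of unallocated (false) slices in the allocation pattern.
    static int longestGap(boolean[] allocated) {
        int run = 0, max = 0;
        for (boolean a : allocated) {
            run = a ? 0 : run + 1;
            max = Math.max(max, run);
        }
        return max;
    }

    public static void main(String[] args) {
        double capacitySlices = 6.5; // from the on-chip RAM sizing on this slide
        boolean[] pattern = new boolean[20];
        for (int t = 0; t < pattern.length; t++)
            pattern[t] = (t % 10 == 0); // only slices 0 and 10 allocated

        int gap = longestGap(pattern);
        System.out.println("longest unallocated gap = " + gap + " slices");
        System.out.println(gap > capacitySlices ? "risk of frame loss" : "safe");
    }
}
```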

42 FPGA measured results: latency (packet size 64B) • Time-slices are allocated based on different bit rates. In both the 64B and 1500B latency results: Time-slice Allocation 1 makes all the time-slices available; Time-slice Allocation 2 spreads the time-slices equally over the frame; Time-slice Allocation 3 gathers the largest possible number of time-slices together • Packet size 64B, Ethernet bit rate 1 Gbps (91 time-slices). Time-slice allocation affects QoS (end-to-end latency) in the range of 13.5%: 8.5% for Allocation 2, 22% for Allocation 3 (1.6 ms propagation delay over an "average number of hops" of 2.5).

43 FPGA measured results: latency (packet size 1500B) • For packet size 1500B, Ethernet bit rate 1 Gbps (91 time-slices): Time-slice Allocation 1; Time-slice Allocation 2; Time-slice Allocation 3 (pattern based on the minimum '1's and maximum '0's).

44 FPGA measured results: latency (different wavelengths) • When allocating time-slices on different wavelengths, for example packet size 1500B at 1G: Wavelength 1 25%: W1: W2:

45 FPGA measured results: jitter (packet size 64B). The figures show the percentage of packets that are not received within 1 µs; % of packets are received below 1 µs.

46 FPGA measured results: jitter (packet size 1500B). The figures show the percentage of packets that are not received within 2 µs. 87.5% of packets are received below 2 µs.

47 Experiment and preliminary results. SHINE Node 0 (TSON Node 0 or 10GE Node 0); Data Quality Analyser MD1230B; SHINE Controller. Fig. 6: dynamic service swap.

48 SHINE hitless switch-over from Ethernet to TSON and vice versa. Hitless switch-over from TSON to Ethernet at point A, then back to TSON at point B: (a) measured bit rate; (b) measured bit rate changes; (c) number of transmitted and received frames (hitless).

49 FPGA resource utilization for SHINE deployment

50 TSON testbed

51 Optical backplane configuration. OPTICAL BACKPLANE (3D MEMS) interconnecting, on each side: 4x4 PLZT, 2x2 PLZT, (De)MUX, coupler, traffic server, FPGA node, AMP; plus a waveshaper.

52 Scenario/Topology 1: 1 lambda; 1 way; partial mesh; 3 add/drop nodes.

53 Scenario/Topology 2: 2 lambdas; hybrid one/two ways; partial mesh; 1 add/drop, 1 add, and 1 drop node.

54 Scenario/Topology 3: 2 lambdas; 2 ways; star topology; 4 add/drop nodes.

55 Scenario/Topology 4: 2 lambdas; 1 way; ring formation; 4 add/drop nodes (four PLZT switches).

56 RWTA results

57

58 SLAE Tool: Routing and Spectrum Allocation

59 RSA results

60 [D4.1 insight] Year 2 deliverables: D4.1 • Deliverables during Year 2: D4.1 "Implementation of OPST Metro Ring - OBST Metro Mesh interconnection Node and Mesh bypass Metro Node" (M23), covering: the overall node design and architecture; the Layer 2 sub-set of the node; the Layer 1 sub-set of the node (optical switching node); and the implemented platform to flexibly re-program and re-configure the TSON FPGA hardware prototype platform. Results have been provided to demonstrate performance in terms of throughput, time-slice overhead, latency and jitter.