
1. © 2010 MAINS Consortium. MAINS 2nd EC Technical Review, Brussels, March 29th 2012. MAINS (Metro Architectures enablINg Subwavelengths). Georgios Zervas (WPL, UEssex). MAINS 1st EC Technical Review, Brussels, March 24th 2011. WP4: Experimental validation of the MAINS concept.

2. Contents:  Brief WP4 Y2 summary: objectives, activities and results.  Technical insight on Y2 results: TSON Node prototype; SLAE prototype (to be completed in Y3); Virtual-PC over OPST field trial on the Primetel network; Year 3 plans.

3. Main WP4 Year 2 objectives and plans for Year 3: [Y1] Identify and purchase the main devices required to support TSON. [Y1] Map the TSON architecture (T2.2) to TSON implementation rules. [Y2] Design and implementation of the TSON Node prototype. ✓ [Y2] Field trial of Virtual PC services over OPST on Primetel's network. ✓ [Y2-Y3] Implementation of the SLAE tool for sub-lambda (time) resource allocation. ✓ [Y3] TSON network test-bed. On-going.  [Y3] TSON-OPST transport/data plane interworking. On-going.  [Y3] TSON-OPST interworking with the extended GMPLS control plane. To start M28.

4. Activities breakdown (chart spanning Y1/Y2, Y2 and Y3).

5. WP4 Year 2 Gantt chart: M23 implementation of TSON nodes; M23 field trial of Virtual PC over OPST; M25 implementation of the sub-lambda assignment element; M28 OPST-TSON interconnection; M33 GMPLS-controlled OPST-TSON testbed.

6. Year 2 work summary: objectives, activities and results.  Objectives and activities carried out in T4.1 (M12-M25): – Developing the TSON metro node prototype for edge and bypass services: a Layer 2 prototype system to deliver TSON services (Ethernet parsing, allocation, aggregation, etc.); a Layer 1 prototype system to switch TSON data sets; dynamic and programmable Layer 2 TSON reconfiguration. – Sub-Lambda Allocation Engine (SLAE) tool, ending in Year 3 (M25).  Activities carried out in T4.2 (M21-M23): – Field trial demonstrating the Virtual PC service over an OPST ring on Primetel's network.  Results: – TSON node (L2 and L1) prototype (T4.1, D4.1). – Field trial of Virtual PC over OPST successfully performed (T4.2, D4.2). – SLAE prototype tool (T4.1, D4.5).

7. Deliverables: D4.1, Implementation of OPST Metro Ring - OBST Metro Mesh interconnection node and mesh bypass metro node, M23 (Year 2); D4.2, Field trial of OPST ring demonstrating PC virtualization services, M23 (Year 2); D4.5, Implementation of sub-lambda assignment element, M25 (Year 3); D4.3, Integration of controlled OBST Metro Mesh interconnected with OPST Metro Ring network, M28 (Year 3); D4.4, Field trial of GMPLS-controlled ring-mesh interconnected network, M33 (Year 3).

8. [D4.1 insight] Time-Shared Optical Network (TSON) node prototype.  Software platform to support: – the Virtual PC application; – a sub-wavelength-enabled GMPLS stack; – a sub-lambda PCE for RWTA; – node interfaces.  Layer 2 FPGA-based platform to support: – 1x10GE and 2xTSON transceivers; – independent hardware sharing and emulation; – flexible time-slice aggregation, scheduling and optical data formatting; – Software-Hardware defIned Network (SHINE): hitless reconfiguration from packet-based (e.g. Ethernet) to TSON transport.  Layer 1 optical nodes based on: – only fast switches (PLZTs) for TSON support; – Architecture on Demand (AoD) for flexible time and frequency allocation.

9. MAINS TSON node FPGA developments summary. Top Level (≈5K code lines): component instantiation and glue logic, VHDL (≈4200 code lines); MDIO controller (provides access to the configuration and status registers of the PHY), VHDL (≈300 code lines); I2C controller (programs the I2C-programmable XO/VCXO SI570 and synthesizer SI5368), VHDL (≈200 code lines); signal/pulse synchronization (signal or pulse synchronization between different clock domains), VHDL (≈300 code lines). A TSON metro node with one ingress/egress node totals 30.2K code lines; with two ingress/egress nodes, 48.6K code lines.

10. MAINS TSON node FPGA developments summary (contd.). Ingress node (≈11.2K code lines): GTH transceivers (IP core), VHDL/ngc (≈3200 code lines); 10G Ethernet MAC (IP core), VHDL/ngc (≈1200 code lines); receiver buffer (receives Ethernet frames, keeps good ones and drops bad ones), VHDL (≈800 code lines); distribution block (demuxes input Ethernet frames into different FIFOs), VHDL (≈1400 code lines); aggregation (aggregates Ethernet frames into TSON bursts), VHDL (≈2800 code lines); transmitter buffer (sends bursts out according to the time-slice allocation; synchronizes from RX clock to TX clock), VHDL (≈1800 code lines).

11. MAINS TSON node FPGA developments summary (contd.). Egress node (≈7.2K code lines): GTH transceivers (IP core), VHDL/ngc (≈3200 code lines); 10G Ethernet MAC (IP core), VHDL/ngc (≈1200 code lines); receiver buffer (receives TSON data sets), VHDL (≈1500 code lines); segregation (segregates TSON bursts into Ethernet frames), VHDL (≈600 code lines); transmitter buffer (sends Ethernet frames out when segregation is done; synchronizes from RX clock to TX clock), VHDL (≈700 code lines).

12. MAINS TSON node FPGA developments summary (contd.). GMPLS communication (≈6.4K code lines): GTH transceivers (IP core), VHDL/ngc (≈3200 code lines); 10G Ethernet MAC (IP core), VHDL/ngc (≈1200 code lines); receiver buffer (receives Ethernet frames, keeps good ones and drops bad ones), VHDL (≈800 code lines); LUT/LUT_Update (updates the time-slice allocation and the PLZT switch look-up table), VHDL (≈500 code lines); transmitter buffer (sends Ethernet frames to the server as LUT-update feedback; synchronizes from RX clock to TX clock), VHDL (≈700 code lines). PLZT switch control (≈0.4K code lines): controls the PLZT switch based on the PLZT switch LUT, VHDL (≈400 code lines).

13. [D4.5 insight] Sub-Lambda Allocation Engine tool.  Request from the GMPLS CP (the SLAE tool is invoked by GMPLS with): – network topology matrix; – number of wavelengths per link; – number of time slices per wavelength; – source node; – destination node; – bit-rate request; – the number of paths (K) for KSP.  Response back to the GMPLS CP: – the assigned path; – the assignment matrix (wavelength & time slices).
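The request/response cycle above can be sketched as follows. This is a minimal illustrative Python re-implementation, not the project's SLAE tool (which is written in Java); the `free_slices` data model and the single-shortest-path routing (rather than full KSP) are simplifying assumptions made here for illustration.

```python
# Hypothetical sketch of the SLAE request/response cycle.
import heapq

def shortest_path(topology, src, dst):
    """Dijkstra over a {node: {neighbour: cost}} adjacency map."""
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in topology.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

def allocate(request, free_slices):
    """First-fit: pick the first wavelength with enough free time slices
    on every link of the path.  free_slices[link][wavelength] is a set
    of free slice indices (assumed data model, for illustration only)."""
    path = shortest_path(request["topology"], request["src"], request["dst"])
    if path is None:
        return None
    links = list(zip(path, path[1:]))
    for wl in range(request["num_wavelengths"]):
        common = set.intersection(*(free_slices[l][wl] for l in links))
        if len(common) >= request["slices_needed"]:
            chosen = sorted(common)[:request["slices_needed"]]
            return {"path": path, "wavelength": wl, "time_slices": chosen}
    return None  # request blocked: no wavelength has enough free slices
```

The response mirrors the slide: the assigned path plus a (wavelength, time-slice) assignment, or a blocking indication.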

14. MAINS SLAE developments summary. Sub-Lambda Assignment Engine (≈5K code lines, Java): main function (≈1200 code lines); wavelength and time-slice allocation algorithm (≈600 code lines); network information abstraction (≈1400 code lines); routing data types and algorithms (≈500 code lines); utility functions (≈900 code lines); outputs and statistics (≈400 code lines).

15. TSON node/network testbed (photos): servers (2x10GE NICs each) for network control and Virtual PC services; FPGA for L2 operation; PLZT switches (4x4s, 2x2s); EDFAs; 2 FPGA L2 platforms; FPGA SFP+ (80 km WDM) TX/RX; back view: FPGA/(de)mux.

16. [D4.2 insight] Field trial of OPST ring demonstrating PC virtualization services.  Field trial and setup: – planned setup; – topology; – node specifications; – configuration.  Deployment and operational aspects: – OPST deployment; – service integration; – network interfaces.  Performance evaluation and quality of experience: – use-case scenarios; – scalability tests.

17. Field trial: Virtual PC over OPST ring on the Primetel network. Planned setup.

18. Field trial configuration specs.  2 Netgear switches: 48 1 Gbps ports, 2 10GE SFP uplink ports.  1 Sun Fire X2270 server: 4-core Xeon E5504 @ 2.0 GHz; 2-port 1GE NIC (igb); 6 GB RAM; 1 TB RAID1 array.  1 Sun Fire X4150: 8-core Xeon E5450 @ 3.0 GHz; 4-port Gigabit NIC (e1000e); 8 GB RAM; 1 TB RAID5 array.  3 Intune Beta NXT nodes: – Node 1: 1x10G client port, drop channel 16, MAC address 00:50:C2:80:10:2C; – Node 2: 1x10G client port, drop channel 10, MAC address 00:50:C2:80:10:2F; – Node 3: 1x10G client port, drop channel 19, MAC address 00:50:C2:80:10:2B; – line physical interface: fast tuning laser, 2x LC/APC; – client physical interface: 10GBase-R (IEEE 802.3ae), 1x XFP.  1 Intune Networks Beta Node Photonic Manager.  1 Intune Networks Beta Node Connection Manager.

19. Field trial: Virtual PC over OPST ring, actual topology setup.  LINK TO VIDEO COULD GO HERE

20. Field trial: Virtual PC over OPST ring, Intune configuration.  Two virtualization servers are connected to Netgear switches, which in turn connect to Intune Beta nodes 1 and 3 in a symmetrical ring with a 5 km span per segment.  The ring is formed by three Intune Beta nodes, one of which serves as a pass-through (node 2 in the picture).  The Netgear switches provide the interface between the servers' 1G ports and the 10G client interfaces on the Intune Beta nodes.

21. Field trial: Virtual PC over OPST ring, OPST deployment at Primetel Nicosia HQ. Figure 1: two OPST nodes at HQ. Figure 2: switches for management over LAN. Figure 3: 10 Gbit XFPs for the optical switch to the OPST nodes. Figure 4: research servers 2 & 3.

22. Field trial: Virtual PC over OPST ring, Intune management system.  A PC was also connected to the LAN to support configuration and management, allowing for the following: – When the three-node ring was powered up, the power levels had to be calibrated optically. – Bit-error test, in which PRBS test data is generated before actual data traffic is allowed on the ring. – MAC address configuration. – Master node selection and service mode initialization. – Virtual connections setup.

23. Field trial: Virtual PC over OPST ring, applications setup.  A number of virtual machines were created on a single laptop connected to the prototype network with real user interfaces.  With the virtual machine application we could create virtual machines, shut them down, modify them, and, where necessary, transfer them from one server to another.  Figure 1 (left) shows the login at the MAINS Virtual PC homepage.  Figure 2 (right) shows the user-friendliness of the application, where one is able to indicate on a map the selected destination and hence the server location. Server selection can be achieved through this interface. The orange smiley icon represents the user, the blue server is the selected server, and the red servers are the available servers.

24. Field trial: Virtual PC over OPST ring, use-case scenarios.  Quality of experience in Scenario 1: the user accesses his virtual machine from a local server on the same network and hence experiences optimum performance.  Quality of experience in Scenario 2: the user accesses his virtual machine from a remote server located approximately 11 km away. The quality level experienced by the user was not much different from the first scenario, with very minor jitter due to the transport-layer protocol.  Quality of experience in Scenario 3: we tested the transfer of a virtual machine from a remote to a local server while a mobile user changed his point of access from one location to another. The handover time experienced by the user at the application level was mainly due to the copying time of the virtual machine, caused by the transport-layer protocol.

25. Year 3 activities: TSON network testbed.  TSON network testbed [ongoing work]: – connect multiple TSON edge and bypass nodes together.  OPST-TSON interworking [ongoing work]: – OPST nodes to be hosted at UEssex, and trial of an end-to-end OPST-TSON network.  Vertical integration of OPST and TSON with GMPLS [ongoing work]: – software integration among GMPLS-PCE-XML; – control-transport plane integration.  Workshop/demo at ECOC 2013: – control-plane workshop organized by Juan Pedro Fernandez-Palacios; – remote control-plane demos.

26. Thank you.

27. Backup slides.

28. Simulation scenario.  Telefonica's Madrid metro reference network: – 15 nodes, 23 bidirectional links, 16 wavelengths at 10 Gb/s; – wavelength conversion, optical buffering and time-slice interchange are not deployed in the network.  Traffic characteristics: – requests follow a Poisson process, with exponentially distributed holding times of mean 1/μ = 60 s; – 1 Gbps service connections are used.  Two time-slice allocation assignments, with fixed or tunable Tx: – First-Fit Contiguous (FFC); – Multi-Wavelength First-Fit (MWFF, or TETRIS).
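The two assignment policies can be sketched on a single link's wavelength/time-slice grid. This is an illustrative Python sketch whose exact semantics are inferred from the policy names (an assumption, not the simulator's code): FFC requires one contiguous run of free slices on a single wavelength, while MWFF may gather non-contiguous slices across several wavelengths.

```python
# Sketch of the two time-slice assignment policies compared above.
def ffc(grid, need):
    """First-Fit Contiguous.  grid[w] is a list of booleans (True = free
    slice).  Returns (wavelength, [slice indices]) for the first
    contiguous run of `need` free slices, or None if blocked."""
    for w, slices in enumerate(grid):
        run = []
        for i, free in enumerate(slices):
            run = run + [i] if free else []   # extend run or reset it
            if len(run) == need:
                return (w, run)
    return None

def mwff(grid, need):
    """Multi-Wavelength First-Fit: slices need not be contiguous and may
    span wavelengths.  Returns a list of (wavelength, slice) pairs."""
    picked = []
    for w, slices in enumerate(grid):
        for i, free in enumerate(slices):
            if free:
                picked.append((w, i))
                if len(picked) == need:
                    return picked
    return None
```

With a fragmented grid, a request can be blocked under FFC yet served under MWFF, which is the flexibility behind MWFF's lower blocking in the results that follow.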

29. Results: connection blocking & utilization (add-port utilization vs. offered load, in Erlang).  Connection blocking probability as a function of the offered load to the network: – FFC delivers similar results with 12 tunable transceivers or 16 fixed transceivers, a 25% port-count reduction when tunable transceivers are used. – MWFF supports increased offered load (e.g. 60% higher at 10E-3) due to its greater time-slice allocation flexibility; MWFF benefits more from a higher number of fixed ports than from transceiver tunability.  The add-port utilization under maximum offered load reaches: – 68% for FFC with fixed and 63% with tunable transceivers; – 82% for MWFF with fixed and 72% with tunable transceivers; – MWFF gives 32% improved add-port utilization compared to FFC.

30. Sub-wavelength connection blocking vs. network load: 55% network utilization at 10E-3 CBP with MWFF; 49% increased network utilization.

31. Results: implementation complexity.  For the FFC case, the number of fragments per connection remains one.  In contrast, MWFF uses more than 8 fragments at all levels of offered load considered.  MWFF uses more wavelengths (up to 6) as the offered load increases, to provide the required number of time slices.

32. OPST-TSON interworking (diagram).

33. TSON L1/L2 metro node, both bypass and interconnection (diagram: AWGs, optical node, TSON ingress and egress blocks).

34. TSON transit node to WSON and FSON (Frequency…): addresses the reviewers' comments and supports transit to the core.

35. Current FPGA design and implementation. The current FPGA design realises 2 TSON ingress nodes and 2 TSON egress nodes; the LUT is updated by the server through the 10G Ethernet interface.

36. Addressing inter-burst optical power imbalance and burst-mode reception (Year 1 reviewers' comment).  Burst-mode operation: – no data between time-sliced data sets (bursts); – possible issues with power imbalance; – need for fast-transient-response EDFAs; – need for a burst-mode receiver (BMR).  Proposing a TSON Ethernet-based transport: – keep-alive pattern (K-characters) between time-sliced data sets (bursts), with the K-characters prior to a burst coming from the same clock source; – minimizes the power-imbalance issue; – no need for fast-transient-response EDFAs and no need for a BMR.
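The framing idea above can be modelled as a word stream: instead of dark gaps between bursts, the transmitter emits keep-alive K-characters, so the line never goes quiet and the receiver clock keeps lock without a burst-mode receiver. This is a toy Python model; the "K" and payload tokens are placeholders, not real 64b/66b code groups, and the one-burst-per-slice layout is an assumption for illustration.

```python
# Toy model of the keep-alive framing of the TSON Ethernet-based transport.
def tx_stream(allocation, burst_words, slice_len):
    """allocation: '1' = slice carries a burst, '0' = idle slice.
    Returns the emitted word stream for one frame: payload words with
    keep-alive 'K' characters filling every remaining word slot."""
    stream = []
    for slot in allocation:
        if slot == "1":
            words = burst_words[:slice_len]
            # pad the slice tail with keep-alive characters
            stream += words + ["K"] * (slice_len - len(words))
        else:
            stream += ["K"] * slice_len   # idle slice: keep-alives only
    return stream
```

Every word slot is occupied, so optical power stays balanced between bursts, which is the property the slide relies on to drop the fast-transient EDFA and BMR requirements.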

37. Current FPGA design. XAUI: 10 Gigabit transceivers; data width: 64 bits; frequency: 156.25 MHz. K-characters: for synchronization and clock recovery. Ethernet data: assuming maximum Ethernet frame length, for one time slice (10 μs): – total traffic = T*F = 1562 words * 64 bits = 99,968 bits; – actual traffic = 7*191/1562 = 85.5%.
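The arithmetic above can be checked directly. Assumptions taken from the slide: a 64-bit datapath at 156.25 MHz, a 10 μs time slice, and a maximum-length Ethernet frame occupying about 191 words on the bus, 7 frames per slice.

```python
# Worked version of the slide's time-slice capacity arithmetic.
CLOCK_HZ = 156.25e6   # XAUI datapath clock
WORD_BITS = 64        # datapath width
SLICE_S = 10e-6       # one time slice

words_per_slice = int(CLOCK_HZ * SLICE_S)          # 1562 words ("T")
total_bits = words_per_slice * WORD_BITS           # 99,968 bits
frames_per_slice, words_per_frame = 7, 191
utilization = frames_per_slice * words_per_frame / words_per_slice
print(total_bits, round(100 * utilization, 1))     # 99968 85.6 (the slide truncates to 85.5%)
```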

38. Year 2 deliverables: results.  The practical/theoretical TSON utilization for 1500B packets is 95.38%.  The TSON/Ethernet utilization for 1500B packets is 87.96%.  Ingress/egress processing latency doesn't exceed 160 μs for 1500B Ethernet data. G. Zervas, et al., FUNMS 2012.

39. SHINE use case: hitless switch-over between Ethernet and TSON.  TSON allows flexible data aggregation, which introduces a latency overhead that nevertheless remains very low with the FPGA-based solution: < 150 μs over 2 switches. Ultra-low-latency Ethernet (< 7 μs) is able to support delay-stringent applications. Y. Qin, et al., ONDM 2012.

40. Layer 2 measured results: overhead. For 64B packets at 7G, the overhead was not measured because of a software bug. The figure shows the result for 1500B packets.

41. FPGA measured results: contiguous '0's.  Limited capacity of FPGA on-chip RAM.  The current TSON FPGA design uses 131K of RAM as the rx_fifo buffer and 524K of RAM as the aggregation buffer.  It can hold at most 6.5 time slices of data.  If there is a large number of contiguous unallocated time slices, the buffer in the FPGA might overflow, causing Ethernet frame loss.
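The overflow risk can be illustrated with a simple occupancy model (a sketch, not the FPGA implementation): ingress traffic keeps arriving every slice, the buffer drains only during allocated slices, and a long run of '0's pushes occupancy past the 6.5-slice capacity stated above.

```python
# Illustrative buffer-occupancy model for runs of unallocated slices.
def max_buffer_fill(allocation, capacity_slices=6.5):
    """allocation: string of '1'/'0' per time slice.  One slice-worth of
    traffic arrives every slice; one slice-worth departs when the slice
    is allocated.  Returns (peak occupancy, whether overflow occurred)."""
    level, peak, overflowed = 0.0, 0.0, False
    for slot in allocation:
        level += 1.0                      # ingress traffic keeps arriving
        peak = max(peak, level)
        if slot == "1":                   # allocated slice drains one slice
            level -= 1.0
        if level > capacity_slices:       # exceeds on-chip RAM: frames lost
            level, overflowed = capacity_slices, True
    return peak, overflowed
```

Under this model any run of 7 or more contiguous '0's overflows the assumed 6.5-slice buffer, matching the frame-loss mechanism described above.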

42. FPGA measured results: latency (packet size 64B).  Time slices are allocated according to the requested bit rate. In both the 64B and 1500B latency results:  Time-slice Allocation 1 makes all the time slices available;  Time-slice Allocation 2 spreads the time slices equally across the frame;  Time-slice Allocation 3 gathers as many time slices together as possible.  Packet size 64B, Ethernet bit rate 1 Gbps (91 time slices):  Time-Slice1: 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111  Time-Slice2: 0000010000010000010000010000010000010000010000010000010000010000000100000010000001000001000  Time-Slice3: 0000000000000000000000000000000000011111110000000000000000000000000000000000000001111111000. Time-slice allocation affects QoS (end-to-end latency) over a range of 13.5%: 8.5% for Allocation 2 vs. 22% for Allocation 3 (1.6 ms propagation delay over an average of 2.5 hops).
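The three pattern families can be generated programmatically. This is a hypothetical Python sketch of the pattern construction, not the tool that produced the slide's bitmaps; the frame length (91 slices) comes from the slide, while the generator functions and the even-spreading rule are assumptions.

```python
# Hypothetical generators for the three time-slice allocation patterns.
def all_slices(n):
    """Allocation 1: every slice in the frame is available."""
    return "1" * n

def spread(n, k):
    """Allocation 2: k slices spread as evenly as possible over n
    positions (rounding may merge marks for awkward n/k ratios)."""
    marks = {round(i * n / k) for i in range(k)}
    return "".join("1" if i in marks else "0" for i in range(n))

def clustered(n, k):
    """Allocation 3: k slices gathered together in mid-frame."""
    start = (n - k) // 2
    return "".join("1" if start <= i < start + k else "0" for i in range(n))
```

Spreading keeps the wait until the next allocated slice short (lower latency), while clustering forces aggregated traffic to wait for the burst window, which is consistent with Allocation 3 showing the largest latency penalty.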

43. FPGA measured results: latency (packet size 1500B).  For packet size 1500B, Ethernet bit rate 1 Gbps (91 time slices):  Time-Slice1: 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111  Time-Slice2: 0000001000000100000010000001000000100000010000001000000100000000100000001000000100000001000  Time-Slice3 (pattern based on the minimum number of '1's and the maximum number of '0's): 0000000000000000000000000111100000000000000000000000001111000000000000000000000000011110000

44. FPGA measured results: latency (different wavelengths).  When allocating time slices on different wavelengths, for example packet size 1500B at 1G, with Wavelength 1 at 25%: W1: 000000100000000000001000000100000010000000000000100000010000000010000000100000000000000100 W2: 000000000000010000000000000000000000000001000000000000000000000000000000000000010000000000

45. FPGA measured results: jitter (packet size 64B). The figures show the percentage of packets not received within 1 μs: 99.35% of packets are received below 1 μs.

46. FPGA measured results: jitter (packet size 1500B). The figures show the percentage of packets not received within 2 μs: 87.5% of packets are received below 2 μs.

47. Experiment and preliminary results (setup: SHINE node 0, i.e. TSON node 0 or 10GE node 0; data quality analyser MD1230B; SHINE controller). Fig. 6: dynamic service swap.

48. SHINE hitless switch-over from Ethernet to TSON and vice versa. Hitless switch-over from TSON to Ethernet at point A, then back to TSON at point B: (a) measured bit rate; (b) measured bit-rate changes; (c) number of transmitted and received frames (hitless).

49. FPGA resource utilization for SHINE deployment.

50. TSON testbed.

51. Optical backplane configuration (diagram: optical backplane (3D MEMS); per node: 4x4 PLZT, 2x2 PLZT, (de)mux, coupler, traffic server, FPGA node, amplifier; waveshaper).

52. Scenario/Topology 1:  1 lambda;  1 way;  partial mesh;  3 add/drop nodes.

53. Scenario/Topology 2:  2 lambdas;  hybrid one/two ways;  partial mesh;  1 add/drop, 1 add, and 1 drop node.

54. Scenario/Topology 3:  2 lambdas;  2 ways;  star topology;  4 add/drop nodes.

55. Scenario/Topology 4:  2 lambdas;  1 way;  ring formation;  4 add/drop nodes (diagram: four PLZT switches in a ring).

56. RWTA results.


58. SLAE tool: routing and spectrum allocation.

59. RSA results.

60. [D4.1 insight] Year 2 deliverables: D4.1.  Deliverables during Year 2: – D4.1, "Implementation of OPST Metro Ring - OBST Metro Mesh interconnection Node and Mesh bypass Metro Node" (M23), covering: the overall node design and architecture; the Layer 2 subset of the node; the Layer 1 subset of the node (the optical switching node); and the platform implemented to flexibly re-program and re-configure the TSON FPGA hardware prototype platform. Results have been provided to demonstrate the performance in terms of throughput, time-slice overhead, latency and jitter.

