

Slide 2
- Team Members
  - Team Leader: Adam Jackson
  - Communication Coordinator: Nick Ryan
  - Bader Al-Sabah
  - David Feely
  - Richard Jones
- Faculty Advisor: Dr. Ahmed Kamal
- Client Contacts: Aaron Cordes, Rick Stevens
- Client: Lockheed Martin

Slide 3
- At this time, the maximum real-world throughput of 10 Gbps network configurations is unknown.

Slide 4
- Lockheed Martin (LM) needs a test plan designed and executed to measure the maximum real-world throughput of a 10 Gbps network composed of Commercial Off-the-Shelf (COTS) components.

Slide 5
- Create and test a network capable of reaching 10 Gbps with COTS components
- Topology has to use fiber optics
- Remain within the approximately $3500 budget

Slide 6
- PCI Express (PCI-E) network cards with an XFP switch
- PCI Extended (PCI-X) network cards with an XFP switch
- Advanced TCA or MicroTCA (µTCA) architectures

Slide 7
- Testing will be completed with two systems directly connected
- Used for testing bandwidth and bandwidth efficiency (a measurement sketch follows below)
(Graphic inspired by previous HSOI team)
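The point-to-point measurements on this slide are carried out with the tools listed on Slide 22 (Qcheck, Ethereal, IP Traffic Test & Measure). Purely as an illustration of what such a bandwidth run does, here is a minimal sketch of a TCP throughput test between two directly connected hosts; the addresses, port, and transfer size are hypothetical placeholders, not part of the team's actual test bench.

```python
# Minimal sketch of a point-to-point TCP throughput test between two directly
# connected hosts. Illustration only; the project's real measurements use
# Qcheck, Ethereal, and IP Traffic Test & Measure. The peer address, port,
# and transfer size below are hypothetical placeholders.
import socket
import sys
import time

PEER_ADDR = ("192.168.10.2", 5001)   # assumed address of the second node
CHUNK = b"\x00" * 65536              # 64 KiB application-layer writes
TOTAL_BYTES = 1 * 1024**3            # send 1 GiB per run

def run_sender() -> None:
    """Stream TOTAL_BYTES to the receiver and report application-level throughput."""
    sock = socket.create_connection(PEER_ADDR)
    sent = 0
    start = time.perf_counter()
    while sent < TOTAL_BYTES:
        sock.sendall(CHUNK)
        sent += len(CHUNK)
    sock.close()
    elapsed = time.perf_counter() - start
    gbps = (sent * 8) / elapsed / 1e9
    print(f"sent {sent} bytes in {elapsed:.2f} s -> {gbps:.2f} Gbps")

def run_receiver(port: int = 5001) -> None:
    """Accept one connection and discard data until the sender closes it."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while conn.recv(1 << 20):
        pass
    conn.close()

if __name__ == "__main__":
    run_receiver() if "recv" in sys.argv else run_sender()
```

One copy would be started with the `recv` argument on the receiving node before the sender is launched on the other node.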

Slide 8
- Composed of three nodes and an Ethernet switch
- Used for testing switching time, latency, and quality of service
(Graphic inspired by previous HSOI team)

Slide 9
- Same node strategy as PCI-E
- Bus speed maximum of approximately 8 Gbps (see the arithmetic below)
- The client requirement of 10 Gbps makes this an infeasible solution
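A quick check of where the roughly 8 Gbps ceiling comes from, assuming the 64-bit, 133 MHz PCI-X variant listed in the interface comparison on Slide 13:

```latex
\[
64~\text{bits} \times 133~\text{MHz} \approx 8.5~\text{Gbps} \approx 1~\text{GB/s}
\]
```

This is already below the 10 Gbps line rate of the NICs before any protocol or bus-arbitration overhead is counted.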

Slide 10
- Testing should be done with a single node due to the high cost of components
- A single node is composed of the following:
  - Three 10 Gbps Network Interface Cards
  - µTCA Carrier Hub
  - Power Module
  - Control Processor
  - Switching Fabric
- Nodes can be connected in various ways

Slide 11
(Diagram courtesy of LMCO)

Slide 12

µTCA
- Advantages
  - Modular design allows for expansion
  - 262.5 Gbps maximum throughput for Advanced Mezzanine Cards (AMC)
- Disadvantages
  - AMC Network Interface Cards at 10 Gbps are not readily available
  - Costly components

PCI-E
- Advantages
  - Readily available optical 10 Gbps NICs
  - Variety of 10 Gbps XFP switches
  - Relatively low-cost components
- Disadvantages
  - Lack of PCI-E systems at ISU

Source: http://www.compactpci-systems.com/columns/Tutorial/pdfs/4.2005.pdf

Slide 13

Interface     | Maximum Transfer Rate
PCI-X 100 MHz | 6.4 Gbps (800 MB/sec)
PCI-X 133 MHz | 8 Gbps (1 GB/sec)
PCI-E x1      | 4 Gbps (500 MB/sec)
PCI-E x4      | 16 Gbps (2 GB/sec)
PCI-E x8      | 32 Gbps (4 GB/sec)
PCI-E x16     | 64 Gbps (8 GB/sec)
µTCA (AMC)    | 262.5 Gbps (32.8 GB/sec)

Source: http://www.dell.com/content/topics/global.aspx/vectors/en/2004_pciexpress?c=us&l=en&s=corp
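The PCI-E rows are the aggregate of both link directions for a first-generation (2.5 GT/s per lane) link after 8b/10b encoding; for an x8 slot, the widest the proposed NIC supports, for example:

```latex
\[
8~\text{lanes} \times 2.5~\text{GT/s} \times \tfrac{8}{10} \times 2~\text{directions} = 32~\text{Gbps} = 4~\text{GB/s}
\]
```

Each direction alone therefore provides 16 Gbps, comfortably above the 10 Gbps line rate of a single NIC port.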

Slide 14
- µTCA will not fit into the budget
- µTCA components may not be available in time
- PCI-X is not fast enough

Slide 15
- Capable of PCI-E x1/x4/x8
- Operating system dual-boot (Windows, Linux)
- Approximate cost: $500/system
- A separate system is needed for each node

Slide 16
- Pluggable XFP optical interface
- 10GBASE-SR and -LR support
- PCI-E Ver. 1.1 interface
- x1/x4/x8 compatible
- 32 Gbps throughput
- Linux and Windows OS supported

Source: NetXen website, http://www.netxen.com/products/boardsolutions/NXB-10GXxR.html

Slide 17
- Model number: SMC8708L2
- Supports up to 8 XFP ports
- Delivers 10-Gigabit Ethernet
- Switching fabric: 160 Gbps
- AC input: 100 to 240 V, 50-60 Hz, 2 A

Source: http://www.pcworld.com/product/pricing/prtprdid,9311286-sortby,retailer/pricing.html

Slide 18
- SMC10GXFP-SR
- TigerAccess™ XFP 10G Transceiver
- 1-port 10GBASE-SR (LC) XFP transceiver
- Used for 10 Gbps connections

Image: http://ecx.images-amazon.com/images/I/11jHA98YEEL._AA160_.jpg

Slide 19
- Node 1, Node 2, Node 3: NXB-10GXxR Intelligent NIC® 10 Gigabit Ethernet PCIe Adapter with pluggable XFP optical interface (http://www.netxen.com/products/boardsolutions/NXB-10GXxR.html)
- Switch: TigerSwitch 10G 8-Port Standalone XFP 10 Gigabit Ethernet Managed Layer 2 Switch, SMC Networks, Inc. (as recommended by Lockheed Martin)

Slide 20

Resource                            | Purchased by Team                    | Purchased by Client | Provided by Department
NICs                                | 2-3                                  | 0-1                 | -
Switch                              | -                                    | 1                   | -
XFP Transceiver (for use on switch) | -                                    | 2-3                 | -
Optical Cabling                     | As needed                            | -                   | Available, details unknown
Computer Systems                    | If necessary and available in budget | -                   | Supplied to senior design lab

Slide 21

Resource           | Quantity | Unit Cost | Total Cost
Optical NICs       | 2        | $1000     | $2000
Optical NICs       | 1        | $1000     | On loan from client ¹
XFP Switch         | 1        | $6500     | On loan from client
XFP Transceiver    | 3        | $1880     | On loan from client
Fiber optic cables | 3        | $80       | $240
Host System        | 3        | $500      | $1500 ²
Total              |          |           | $3740

¹ One optical NIC will need to be borrowed if the team must purchase the host systems.
² The ISU ECpE Department's update of the Senior Design lab may cover this cost.
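As a sanity check on the total, only the items the team purchases outright contribute:

```latex
\[
2 \times \$1000 + 3 \times \$80 + 3 \times \$500 = \$2000 + \$240 + \$1500 = \$3740
\]
```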

Slide 22
- Qcheck
  - Packet generation program
  - Can be used to test bandwidth, bandwidth efficiency, and latency
- Ethereal
  - Packet capture program
  - Can be used for bandwidth efficiency testing
- IP Traffic Test & Measure
  - Network testing suite
  - Can be used for quality of service and latency testing

Slide 23
- Testing will be predominantly software-based; the test bench will be executed on the computer systems described previously.
- If issues arise and the signal needs to be observed, an Agilent 86100A oscilloscope is available from the department.
- Availability of splitters is unknown.

Slide 24
- Bandwidth Measurement
- Channel Capacity
- Bandwidth Efficiency (Throughput)
- Switching Time Measurement
- Latency Measurement
- Quality of Service Measurement

Slide 25
- Execute each test multiple times to ensure precise results
- Provide appropriate statistics from the results
- Use both UDP and TCP protocols when possible
- Vary data size to avoid skewed results due to packet header overhead (see the sketch below)
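One way to structure repeated runs over several payload sizes is sketched below. The measurement function is a placeholder; in practice the per-run value would come from Qcheck, IP Traffic Test & Measure, or a script like the throughput sketch shown after Slide 7.

```python
# Sketch of the repeated-measurement methodology on Slide 25: run each test
# several times per payload size and report summary statistics. The
# measure_throughput_gbps() body is a stand-in with made-up numbers; real
# values would come from the testing tools listed on Slide 22.
import random
import statistics

PAYLOAD_SIZES = [64, 512, 1500, 9000, 65536]   # bytes; vary to expose header overhead
RUNS_PER_SIZE = 10                             # repeat to obtain stable statistics

def measure_throughput_gbps(payload_size: int) -> float:
    """Placeholder for one bandwidth measurement (hypothetical result in Gbps)."""
    return random.uniform(6.0, 9.5)

def main() -> None:
    for size in PAYLOAD_SIZES:
        samples = [measure_throughput_gbps(size) for _ in range(RUNS_PER_SIZE)]
        print(f"{size:>6} B payload: "
              f"mean={statistics.mean(samples):.2f} Gbps, "
              f"stdev={statistics.stdev(samples):.2f} Gbps, "
              f"min={min(samples):.2f}, max={max(samples):.2f}")

if __name__ == "__main__":
    main()
```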

Slide 26
- Bandwidth
  - Compare link usage for each node under varying workload types
- Bandwidth Efficiency
  - Show a comparison of the amount of OSI Layer 1 data sent for different OSI Layer 7 data block sizes (an efficiency estimate sketch follows below)
- Switching Time
  - Compare switching time and link load for cases when 2 and 3 nodes are connected to the network
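As a rough illustration of the Layer 1 versus Layer 7 comparison, the sketch below estimates how much of the raw line rate is left for application data once fixed per-frame overhead is counted. It assumes standard Ethernet/IPv4/TCP header sizes and one frame per application block, and it ignores ACK traffic, retransmissions, and segmentation; real numbers would come from Ethereal captures.

```python
# Rough bandwidth-efficiency estimate: ratio of application (OSI Layer 7) bytes
# to the bytes actually serialized on the wire (OSI Layer 1) for one Ethernet
# frame per application block. Assumes standard header sizes and no options.
PREAMBLE_SFD = 8        # preamble + start-of-frame delimiter bytes on the wire
ETH_HEADER_FCS = 18     # 14-byte Ethernet header + 4-byte FCS
INTERFRAME_GAP = 12     # idle bytes between frames
IP_HEADER = 20          # IPv4 header, no options
TCP_HEADER = 20         # TCP header, no options

def layer1_bytes(payload: int) -> int:
    """Bytes occupying the wire for one frame carrying `payload` Layer-7 bytes."""
    return PREAMBLE_SFD + ETH_HEADER_FCS + INTERFRAME_GAP + IP_HEADER + TCP_HEADER + payload

def efficiency(payload: int) -> float:
    """Fraction of Layer-1 bytes that are Layer-7 application data."""
    return payload / layer1_bytes(payload)

if __name__ == "__main__":
    for size in (64, 512, 1460, 8960):  # example Layer-7 block sizes in bytes
        print(f"{size:>5} B payload -> efficiency {efficiency(size):.1%}, "
              f"~{10 * efficiency(size):.2f} Gbps usable on a 10 Gbps link")
```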

Slide 27
- Latency
  - Compare the latency between nodes under different network loads (a round-trip sketch follows below)
- Quality of Service
  - Show the amount of data received from each sending node for each endpoint node over time
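For latency, one simple approach is to time a UDP round trip to an echo responder on the peer node and halve it. The sketch below is only an illustration of that idea; the plan's actual latency numbers come from Qcheck and IP Traffic Test & Measure, and the address and port here are hypothetical.

```python
# Sketch of a UDP round-trip latency probe between two nodes. Illustration
# only; peer address and port are hypothetical placeholders.
import socket
import statistics
import sys
import time

PEER = ("192.168.10.2", 5002)  # assumed peer node running the echo loop below

def echo_server(port: int = 5002) -> None:
    """Reflect every received datagram back to its sender."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)

def measure_rtt(samples: int = 100) -> None:
    """Send small probes, record round-trip times, and report the median."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        sock.sendto(b"ping", PEER)
        sock.recvfrom(2048)
        rtts.append(time.perf_counter() - start)
    median_rtt = statistics.median(rtts)
    print(f"median RTT {median_rtt * 1e6:.1f} µs, "
          f"approx one-way latency {median_rtt * 5e5:.1f} µs")

if __name__ == "__main__":
    echo_server() if "echo" in sys.argv else measure_rtt()
```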

Slide 28
?

