
Slide 1: Some Performance Measurements of Gigabit Ethernet NICs & Server Quality Motherboards
Richard Hughes-Jones, The University of Manchester
Workshop on Protocols for Fast Long-Distance Networks (PFLDNet), February 2003
Session: Close to Hardware

Slide 2: The Latency Measurements Made
- UDP/IP packets sent between back-to-back systems: processed in a similar manner to TCP/IP, but not subject to flow control & congestion avoidance algorithms. Used the UDPmon test program.
- Latency: round-trip times measured using Request-Response UDP frames.
- Latency as a function of frame size: the slope s is given by the sum of the per-byte times for mem-mem copy(s) + PCI + Gigabit Ethernet + PCI + mem-mem copy(s); the intercept indicates processing times + hardware latencies.
- Histograms of 'singleton' measurements.
- Tells us about: behaviour of the IP stack, the way the hardware operates, interrupt coalescence.
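Purely as an illustration of the request-response technique (this is a minimal sketch, not the UDPmon source), a client that sweeps the frame size and times round trips might look like the following; the server address, port and repetition count are placeholder values, and a trivial UDP echo server is assumed at the far end.

```c
/* Minimal UDP request-response latency sketch (client side).
 * Illustrative only -- not the UDPmon source.  The echo server address,
 * port and frame sizes are placeholders; a matching server simply
 * recvfrom()s each frame and sendto()s it straight back. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    const char *server_ip = "192.168.0.2";   /* placeholder: remote test host */
    const int   port      = 5001;            /* placeholder: echo port        */
    char buf[9000];
    memset(buf, 0, sizeof buf);

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(port);
    inet_pton(AF_INET, server_ip, &dst.sin_addr);

    /* Sweep the frame size; for each size, time many request-response pairs. */
    for (int size = 64; size <= 1472; size += 64) {
        double sum_us = 0.0;
        const int reps = 1000;
        for (int i = 0; i < reps; i++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            sendto(s, buf, size, 0, (struct sockaddr *)&dst, sizeof dst);
            recvfrom(s, buf, sizeof buf, 0, NULL, NULL);   /* wait for the echo */
            clock_gettime(CLOCK_MONOTONIC, &t1);
            sum_us += (t1.tv_sec - t0.tv_sec) * 1e6
                    + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        }
        /* Mean round-trip time vs size: the slope of this line is the per-byte
         * cost (memory copies + PCI + GigE), the intercept the fixed
         * processing + hardware latency. */
        printf("%4d bytes  RTT %.2f us\n", size, sum_us / reps);
    }
    close(s);
    return 0;
}
```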

Slide 3: The Throughput Measurements Made (1)
- UDP Throughput: send a controlled stream of UDP frames spaced at regular intervals.
- Test sequence (sender / receiver exchange): zero the remote statistics ("Zero stats" / "OK done"); send n-byte data frames at regular intervals (number of packets, wait time); signal the end of the test ("OK done"); get the remote statistics.
- Statistics returned: number received, number lost + loss pattern, number out-of-order, CPU load & number of interrupts, 1-way delay.
- Recorded at each end: time to send, time to receive, inter-packet time (histogram).
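As a hedged illustration of the data-sending part only (the zero-stats / OK-done / get-statistics control exchange above is omitted), a paced UDP sender might look like the sketch below; the destination address, port, frame size, packet count and spacing are placeholders, not values from the talk. Pacing by busy-waiting on a monotonic clock keeps the inter-frame spacing tight at microsecond scales, at the cost of burning a CPU.

```c
/* Sketch of the sender side of a UDP throughput test: a controlled stream
 * of n-byte frames, one every "wait_ns" nanoseconds.  Illustrative only --
 * the UDPmon control handshake is not shown.  All parameters are
 * placeholder values. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    const char *dest_ip = "192.168.0.2";  /* placeholder receiver       */
    const int   port    = 5001;           /* placeholder port           */
    const int   size    = 1472;           /* UDP payload bytes          */
    const int   npkts   = 10000;          /* number of packets          */
    const long  wait_ns = 12000;          /* inter-packet spacing, ns   */

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port   = htons(port);
    inet_pton(AF_INET, dest_ip, &dst.sin_addr);

    char buf[1472] = {0};
    uint64_t t0 = now_ns(), next = t0;

    for (uint32_t seq = 0; seq < (uint32_t)npkts; seq++) {
        memcpy(buf, &seq, sizeof seq);          /* sequence number in payload */
        sendto(s, buf, size, 0, (struct sockaddr *)&dst, sizeof dst);
        next += wait_ns;
        while (now_ns() < next) ;               /* busy-wait to hold spacing  */
    }
    uint64_t t1 = now_ns();
    printf("sent %d x %d bytes in %.3f ms -> %.1f Mbit/s user data\n",
           npkts, size, (t1 - t0) / 1e6,
           (double)npkts * size * 8.0 * 1000.0 / (double)(t1 - t0));
    close(s);
    return 0;
}
```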

Slide 4: The Throughput Measurements Made (2)
- UDP Throughput: send a controlled stream of UDP frames spaced at regular intervals; vary the frame size and the frame transmit spacing.
- At the receiver, record: the time of the first and last frames received; the number of packets received, the number lost, the number out of order; the received inter-packet spacing (histogrammed); the time each packet is received, which provides the packet-loss pattern; CPU load and number of interrupts.
- Use the Pentium CPU cycle counter for times and delays: only a few lines of user code (a sketch follows).
- Tells us about: behaviour of the IP stack, the way the hardware operates, capacity and available throughput of the LAN / MAN / WAN.
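A matching receiver-side sketch, again illustrative rather than the UDPmon code: it timestamps each frame with the CPU cycle counter (rdtsc), derives lost and out-of-order counts from the sequence number carried in the payload, and histograms the inter-packet arrival time. The CPU frequency, port and packet count are assumed placeholder values; UDPmon calibrates the cycle counter itself.

```c
/* Sketch of the receiver-side bookkeeping for a UDP throughput test.
 * Illustrative only; port, packet count and cpu_hz are placeholders. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static inline uint64_t rdtsc(void)            /* read the CPU cycle counter */
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    const double cpu_hz = 800e6;    /* placeholder: calibrate on the test PC */
    const int    port   = 5001;
    const int    npkts  = 10000;

    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in me = {0};
    me.sin_family      = AF_INET;
    me.sin_addr.s_addr = htonl(INADDR_ANY);
    me.sin_port        = htons(port);
    bind(s, (struct sockaddr *)&me, sizeof me);

    unsigned hist[200] = {0};                 /* inter-arrival time, 1 us bins */
    uint32_t expected = 0, lost = 0, out_of_order = 0, received = 0;
    uint64_t prev = 0, first = 0, last = 0;
    char buf[9000];

    for (int i = 0; i < npkts; i++) {
        int n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
        if (n < (int)sizeof(uint32_t)) continue;
        uint64_t t = rdtsc();
        if (received == 0) first = t;
        last = t;

        uint32_t seq;
        memcpy(&seq, buf, sizeof seq);
        if (seq < expected)  out_of_order++;
        else               { lost += seq - expected; expected = seq + 1; }
        received++;

        if (prev) {
            double us = (t - prev) * 1e6 / cpu_hz;    /* cycles -> microseconds */
            int bin = us < 199.0 ? (int)us : 199;
            hist[bin]++;                              /* inter-packet histogram */
        }
        prev = t;
    }
    /* In UDPmon the histogram and counters are returned to the sender over
     * the control channel; here we just print a summary. */
    printf("received %u  lost %u  out-of-order %u  duration %.1f us\n",
           received, lost, out_of_order, (last - first) * 1e6 / cpu_hz);
    return 0;
}
```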

Slide 5: The PCI Bus & Gigabit Ethernet Measurements
- PCI activity examined with a logic analyser:
  - PCI probe cards in the sending PC
  - Gigabit Ethernet fibre probe card
  - PCI probe cards in the receiving PC
- Diagram: on each PC the path is CPU - memory - chipset - NIC, with the logic analyser display capturing the PCI and Gigabit Ethernet probes.

Slide 6:
- Examine the behaviour of different NICs
- A nice example of running at 33 MHz
- A quick look at some new server boards

Slide 7: SuperMicro 370DLE: Latency: SysKonnect
- PCI: 32 bit, 33 MHz - latency small (62 µs) and well behaved; latency slope 0.0286 µs/byte (expect 0.0232 µs/byte: PCI 0.00758 + GigE 0.008 + PCI 0.00758).
- PCI: 64 bit, 66 MHz - latency small (56 µs) and well behaved; latency slope 0.0231 µs/byte (expect 0.0118 µs/byte: PCI 0.00188 + GigE 0.008 + PCI 0.00188). Possible extra data moves?
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; RedHat 7.1, Kernel 2.4.14
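For readers following the "expect" figures on this and the later latency slides, the expected slope is simply the sum of the per-byte transfer times of the elements in the path (the memory copies are neglected in these expected values):

\[
s_{\text{expect}} = t_{\text{PCI,send}} + t_{\text{GigE}} + t_{\text{PCI,recv}},
\qquad
t_{\text{PCI}} = \frac{1}{\text{bus width (bytes)} \times f_{\text{PCI}}},
\qquad
t_{\text{GigE}} = \frac{8\ \text{bits}}{10^{9}\ \text{bit/s}} = 0.008\ \mu\text{s/byte}
\]
\[
\text{32 bit, 33 MHz: } t_{\text{PCI}} = \frac{1}{4 \times 33\times10^{6}} \approx 0.00758\ \mu\text{s/byte}
\;\Rightarrow\; s_{\text{expect}} \approx 0.0232\ \mu\text{s/byte}
\]
\[
\text{64 bit, 66.7 MHz: } t_{\text{PCI}} = \frac{1}{8 \times 66.7\times10^{6}} \approx 0.00188\ \mu\text{s/byte}
\;\Rightarrow\; s_{\text{expect}} \approx 0.0118\ \mu\text{s/byte}
\]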

Slide 8: SuperMicro 370DLE: Throughput: SysKonnect
- PCI: 32 bit, 33 MHz - max throughput 584 Mbit/s; no packet loss for spacing > 18 µs.
- PCI: 64 bit, 66 MHz - max throughput 720 Mbit/s; no packet loss for spacing > 17 µs.
- Packet loss during the bandwidth drop.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; RedHat 7.1, Kernel 2.4.14
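As a reading aid for these throughput-versus-spacing plots (my arithmetic, not a figure from the talk), the offered user-data rate for an n-byte UDP payload sent every t microseconds is:

\[
R_{\text{offered}} = \frac{8\,n_{\text{bytes}}}{t_{\text{spacing}}},
\qquad\text{e.g. } n = 1472\ \text{bytes},\ t = 18\ \mu\text{s}
\;\Rightarrow\; R_{\text{offered}} \approx \frac{8 \times 1472}{18\times10^{-6}} \approx 654\ \text{Mbit/s}
\]

This is only the offered load; the measured curves sit at or below it.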

Slide 9: SuperMicro 370DLE: PCI: SysKonnect
- 1400 bytes sent, wait 100 µs.
- Trace shows: send setup, send PCI transfer, packet on the Ethernet fibre, receive PCI transfer, receive transfer to memory.
- ~8 µs for a send or receive transfer; stack & application overhead ~10 µs per node; ~36 µs overall.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, Kernel 2.4.14
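One way the annotated times on this trace fit together (an interpretation of the slide's numbers, not a statement from the talk):

\[
2 \times 10\ \mu\text{s (stack + application, one per node)}
\;+\; 2 \times 8\ \mu\text{s (send and receive transfers)}
\;\approx\; 36\ \mu\text{s}
\]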

Slide 10: SuperMicro 370DLE: PCI: SysKonnect
- 1400 bytes sent, wait 20 µs: frames on the Ethernet fibre at 20 µs spacing.
- 1400 bytes sent, wait 10 µs: frames are back-to-back - can drive at line speed; cannot go any faster!
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, Kernel 2.4.14

Slide 11: SuperMicro 370DLE: Latency: Intel Pro/1000
- Latency high but well behaved - indicates interrupt coalescence.
- Slope 0.0187 µs/byte; expect 0.0118 µs/byte (PCI 0.00188 + GigE 0.008 + PCI 0.00188).
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, Kernel 2.4.14

Slide 12: SuperMicro 370DLE: Throughput: Intel Pro/1000
- Max throughput 910 Mbit/s; no packet loss for spacing > 12 µs.
- Packet loss during the bandwidth drop; CPU load 65-90% for spacing < 13 µs.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, Kernel 2.4.14

Slide 13: SuperMicro 370DLE: PCI: Intel Pro/1000
- Request-Response traffic.
- Demonstrates interrupt coalescence: no processing directly after each transfer.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, Kernel 2.4.14

Slide 14: SuperMicro 370DLE: PCI: Intel Pro/1000
- 1400 bytes sent, wait 11 µs: ~4.7 µs on the send PCI bus (~43% occupancy); ~3.25 µs on the PCI bus for data receive (~30% occupancy).
- 1400 bytes sent, wait 11 µs: action of pause packets visible.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; PCI: 64 bit, 66 MHz; RedHat 7.1, Kernel 2.4.14
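The occupancy figures quoted on these PCI slides follow directly from the transfer time seen on the analyser divided by the inter-packet spacing:

\[
\text{occupancy} = \frac{t_{\text{transfer}}}{t_{\text{spacing}}},
\qquad
\frac{4.7\ \mu\text{s}}{11\ \mu\text{s}} \approx 43\%\ \text{(send)},
\qquad
\frac{3.25\ \mu\text{s}}{11\ \mu\text{s}} \approx 30\%\ \text{(receive)}
\]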

Slide 15: SuperMicro 370DLE: Throughput: Alteon
- PCI: 64 bit, 33 MHz - max throughput 674 Mbit/s; packet loss for spacing < 10 µs.
- PCI: 64 bit, 66 MHz - max throughput 930 Mbit/s; packet loss for spacing < 10 µs.
- Packet loss during the bandwidth drop.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; RedHat 7.1, Kernel 2.4.14

Slide 16: SuperMicro 370DLE: PCI: Alteon
- PCI: 64 bit, 33 MHz - 1400 byte packets; send and receive PCI signals nice and clean.
- PCI: 64 bit, 66 MHz - 1400 byte packets at 16 µs spacing; pauses in the NIC-to-memory transfer slow down the receive transfer.
- Motherboard: SuperMicro 370DLE; Chipset: ServerWorks III LE; CPU: PIII 800 MHz; RedHat 7.1, Kernel 2.4.14

Slide 17: IBM das: Throughput: Intel Pro/1000
- Max throughput 930 Mbit/s; no packet loss for spacing > 12 µs; clean behaviour.
- Packet loss during the bandwidth drop.
- Motherboard: IBM das; Chipset: ServerWorks CNB20LE; CPU: Dual PIII 1 GHz; PCI: 64 bit, 33 MHz; RedHat 7.1, Kernel 2.4.14

Slide 18: IBM das: PCI: Intel Pro/1000
- 1400 bytes sent at 11 µs spacing; signals clean.
- ~9.3 µs on the send PCI bus (~82% occupancy); ~5.9 µs on the PCI bus for data receive.
- Motherboard: IBM das; Chipset: ServerWorks CNB20LE; CPU: Dual PIII 1 GHz; PCI: 64 bit, 33 MHz; RedHat 7.1, Kernel 2.4.14

Slide 19: SuperMicro P4DP6: Latency: Intel Pro/1000
- Some steps in the latency plot: slope 0.009 µs/byte overall, slope of the flat sections 0.0146 µs/byte; expect 0.0118 µs/byte.
- Latency histogram: no variation with packet size; FWHM 1.5 µs - confirms the timing is reliable.
- Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.2 GHz; PCI: 64 bit, 66 MHz; RedHat 7.2, Kernel 2.4.19
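As a side note (an assumption of mine, not stated in the talk): if the latency histogram is roughly Gaussian, a FWHM of 1.5 µs corresponds to a standard deviation of about 0.6 µs:

\[
\sigma = \frac{\text{FWHM}}{2\sqrt{2\ln 2}} \approx \frac{1.5\ \mu\text{s}}{2.355} \approx 0.64\ \mu\text{s}
\]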

Slide 20: SuperMicro P4DP6: Throughput: Intel Pro/1000
- Max throughput 950 Mbit/s; no packet loss.
- CPU utilisation on the receiving PC: ~25% for packets > 1000 bytes, 30-40% for smaller packets.
- Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.2 GHz; PCI: 64 bit, 66 MHz; RedHat 7.2, Kernel 2.4.19

Slide 21: SuperMicro P4DP6: PCI: Intel Pro/1000
- 1400 bytes sent, wait 12 µs: ~5.14 µs on the send PCI bus (~68% occupancy); ~3 µs on the PCI bus for data receive.
- CSR access inserts PCI STOPs: the NIC takes ~1 µs per CSR access - the CPU is faster than the NIC! A similar effect is seen with the SysKonnect NIC.
- Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.2 GHz; PCI: 64 bit, 66 MHz; RedHat 7.2, Kernel 2.4.19

Slide 22: SuperMicro P4DP8-G2: Throughput: SysKonnect
- Max throughput 990 Mbit/s (new card compared with the other tests).
- CPU utilisation: 20-30% on the sender, ~30% on the receiver.
- Motherboard: SuperMicro P4DP8-G2; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.4 GHz; PCI: 64 bit, 66 MHz; RedHat 7.3, Kernel 2.4.19

Slide 23: SuperMicro P4DP8-G2: Throughput: Intel onboard
- Max throughput 995 Mbit/s; no packet loss.
- CPU utilisation on the receiver: 20% for packets > 1000 bytes, 30% for smaller packets.
- Motherboard: SuperMicro P4DP8-G2; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.4 GHz; PCI-X: 64 bit; RedHat 7.3, Kernel 2.4.19

Slide 24: Futures & Work in Progress
- Dual Gigabit Ethernet controllers
- More detailed study of PCI-X
- Interaction of multiple PCI Gigabit flows
- What happens when you have disks?
- 10 Gigabit Ethernet NICs

Slide 25: Summary & Conclusions (1)
- All NICs & motherboards were stable - 1000s of GBytes of transfers.
- Alteon could handle 930 Mbit/s on 64 bit / 66 MHz.
- SysKonnect gave 720-876 Mbit/s, improving to 876-990 Mbit/s on later motherboards.
- Intel gave 910-950 Mbit/s, and 950-995 Mbit/s on later motherboards.
- PCI and Gigabit Ethernet signals show that an 800 MHz CPU can drive large packets at line speed.
- More CPU power is required for receiving - loss is due to IP discards. Rule of thumb: at least 1 GHz of CPU power free for 1 Gbit.
- Times for DMA transfers scale with PCI bus speed, but CSR access time is constant; the new PCI-X and on-board controllers are better.
- Buses: 64 bit 66 MHz PCI or faster PCI-X is required for performance; a 32 bit 33 MHz PCI bus is REALLY busy, and 64 bit 33 MHz buses are > 80% used.

Slide 26: Summary, Conclusions & Thanks
- The NICs should be well designed: use advanced PCI commands, so the chipset can make efficient use of memory; CSRs well designed - minimum number of accesses.
- The drivers need to be well written: CSR access, clean management of buffers, good interrupt handling.
- Worry about the CPU-memory bandwidth as well as the PCI bandwidth: data crosses the CPU bus several times.
- Separate the data transfers - use motherboards with multiple PCI buses.
- The OS must be up to it too!!

Slide 27: Throughput Measured for 1472 byte Packets (NIC vs Motherboard)

- SuperMicro 370DLE; Chipset: ServerWorks III LE; PCI 32 bit 33 MHz; RedHat 7.1, Kernel 2.4.14:
  Alteon AceNIC 674 Mbit/s | SysKonnect SK-9843 584 Mbit/s, 0-0 µs | Intel Pro/1000 -
- SuperMicro 370DLE; Chipset: ServerWorks III LE; PCI 64 bit 66 MHz; RedHat 7.1, Kernel 2.4.14:
  Alteon AceNIC 930 Mbit/s | SysKonnect SK-9843 720 Mbit/s, 0-0 µs | Intel Pro/1000 910 Mbit/s, 400-120 µs
- IBM das; Chipset: ServerWorks CNB20LE; PCI 64 bit 33 MHz; RedHat 7.1, Kernel 2.4.14:
  Alteon AceNIC - | SysKonnect SK-9843 790 Mbit/s, 0-0 µs | Intel Pro/1000 930 Mbit/s, 400-120 µs
- SuperMicro P4DP6; Chipset: Intel E7500; PCI 64 bit 66 MHz; RedHat 7.2, Kernel 2.4.19-SMP:
  Alteon AceNIC - | SysKonnect SK-9843 876 Mbit/s, 0-0 µs | Intel Pro/1000 950 Mbit/s, 70-70 µs
- SuperMicro P4DP8-G2; Chipset: Intel E7500; PCI 64 bit 66 MHz; RedHat 7.2, Kernel 2.4.19-SMP:
  Alteon AceNIC - | SysKonnect SK-9843 990 Mbit/s, 0-0 µs | Intel Pro/1000 995 Mbit/s, 70-70 µs

Slide 28: The SuperMicro P4DP6 Motherboard
- Dual Xeon Prestonia (2 CPU/die)
- 400 MHz front-side bus
- Intel E7500 Chipset
- 6 PCI-X slots; 4 independent PCI buses
- Can select: 64 bit 66 MHz PCI, 100 MHz PCI-X, or 133 MHz PCI-X
- 2 x 100 Mbit Ethernet
- Adaptec AIC-7899W dual-channel SCSI
- UDMA/100 bus master EIDE channels: data transfer rates of 100 MB/s burst
- The P4DP8-G2 adds dual Gigabit Ethernet

Slide 29: More Information - Some URLs
- UDPmon / TCPmon kit + write-up: http://www.hep.man.ac.uk/~rich/net
- ATLAS: Investigation of the Performance of 100 Mbit and Gigabit Ethernet Components Using Raw Ethernet Frames: http://www.hep.man.ac.uk/~rich/atlas/atlas_net_note_draft5.pdf
- DataGrid WP7 Networking: http://www.gridpp.ac.uk/wp7/index.html
- Motherboard and NIC tests: www.hep.man.ac.uk/~rich/net/nic/GigEth_tests_Boston.ppt
- IEPM-BW site: http://www-iepm.slac.stanford.edu/bw

Slide 30: SuperMicro P4DP6: Latency: SysKonnect
- Latency low; an interrupt for every packet; latency well behaved.
- Slope 0.0199 µs/byte; expect 0.0118 µs/byte (PCI 0.00188 + GigE 0.008 + PCI 0.00188).
- Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.2 GHz; PCI: 64 bit, 66 MHz; RedHat 7.3, Kernel 2.4.19

Slide 31: SuperMicro P4DP6: Throughput: SysKonnect
- Max throughput 876 Mbit/s - a big improvement.
- Loss is not due to user-to-kernel moves; loss traced to "indiscards" in the receiving IP layer.
- CPU utilisation on the receiving PC: ~25% for packets > 1000 bytes, 30-40% for smaller packets.
- Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.2 GHz; PCI: 64 bit, 66 MHz; RedHat 7.3, Kernel 2.4.19

Slide 32: SuperMicro P4DP6: PCI: SysKonnect
- 1400 bytes sent; DMA transfers clean.
- PCI STOP signals when accessing the NIC CSRs: the NIC takes ~0.7 µs per CSR access - the CPU is faster than the NIC!
- Trace shows send PCI, receive PCI and the PCI STOP signals.
- Motherboard: SuperMicro P4DP6; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.2 GHz; PCI: 64 bit, 66 MHz; RedHat 7.2, Kernel 2.4.19

Slide 33: SuperMicro P4DP8-G2: Latency: SysKonnect
- Latency low; an interrupt for every packet; several steps in the latency plot.
- Slope 0.022 µs/byte; expect 0.0118 µs/byte (PCI 0.00188 + GigE 0.008 + PCI 0.00188).
- Plot is smooth for PC - switch - PC: slope 0.028 µs/byte.
- Motherboard: SuperMicro P4DP8-G2; Chipset: Intel E7500 (Plumas); CPU: Dual Xeon Prestonia 2.4 GHz; PCI: 64 bit, 66 MHz; RedHat 7.3, Kernel 2.4.19

Slide 34: Interrupt Coalescence: Throughput - Intel Pro/1000 on the 370DLE (plot only)

