
1 Computer Networks & Digital Lab project. In cooperation with Mellanox Technologies Ltd. Guided by: Crupnicoff Diego & Gurewitz Omer. Students: Cohen Erez, Gindi Nimrod & Krig Amit.

2 Thanks !!!!!
We would like to thank all the people without whom this project would not have come to fruition, and whose guidance and help brought us this far:
Yoram Or-Chen (Computer Networks Lab) – for the support and warmth all along the way.
Eli Shoshan (Digital Lab) – for steering us to focus on the directions and topics that are truly interesting.
Yoram Yehia (Computer Networks Lab) – for the great help in coordinating all stages of the project.
Omer Gurewitz and Diego Crupnicoff – for the guidance, supervision and extensive help at every stage of the project.
We would especially like to thank Shai Cohen, VP of Operations at Mellanox Technologies, for the considerable (and valuable) time and resources he made available to us so that we could complete the project.

3 AGENDA
Project motivation.
Project objective.
InfiniBand short preview (10 min).
Project system overview:
 Hardware project description (10 min).
 Software project description (10 min).
Project demonstration tools and process explanation (10 min):
 Agilent IB tracer usage demonstration.
 Specific InfiniBand terms to be used.
 Explanation of the specific patterns to be sent.
 Mellanox InfiniBand development tool to be used.

4 AGENDA – Cont'
Project demonstration (20 min):
 Installation process of the system on a 'virgin' machine.
 Sending various InfiniBand packets from the Ibgenerator to Agilent's IB tracer.
 Demonstration of a full InfiniBand system flow (sending InfiniBand packets from the Ibgenerator through Agilent's IB tracer to the Mellanox InfiniBridge).

5 Project Motivation
The InfiniBand world is still in its early days, and as such it needs all sorts of supporting equipment. As we will show, analyzing equipment already existed when we started thinking about this project; what we could not locate was a dedicated device whose purpose is transmitting InfiniBand packets (and we still cannot find such a device on the market, although several companies declared almost a year ago that they are working on one). That, together with our desire to learn more about the InfiniBand standard, led us to the idea of this project.

6 Project Objective
Learning and deeply understanding the new InfiniBand™ Architecture.
Implementing a packet generator / transmitter focused on the InfiniBand™ protocol (release 1.0a).

7 The InfiniBand Standard – short preview (refreshing our memory)
To refresh our memory without repeating what was already presented in earlier presentations of these projects, we will review the InfiniBand architecture here at 'title' level.

8 InfiniBand Overview
Switch fabric: concurrent data transfer, no mechanical constraints.
Performance: high bandwidth (2.5 to 30 Gbit/sec), fast I/O access by applications.
QoS: packet-granularity bandwidth allocation, packet-granularity latency decisions.
[Diagram: end nodes connected to one another through a fabric of switches.]

9 The InfiniBand Architecture Model
Host Channel Adapters (HCAs) for computing platforms.
Target Channel Adapters (TCAs) for specialized subsystems.
Subnets consist of links and switches.
Routers enable inter-subnet communication while providing subnet isolation.
[Diagram: a host's CPUs and system memory attach through an HCA to IB links; switches connect HCAs and TCA-based targets within a subnet, and routers connect the subnet to other IB or legacy networks.]

10 InfiniBand System Architecture
Decouples CPU and I/O.
OS-independent.
Scope – from "PCI" to WAN.
Same 'look and feel' for local or remote nodes.
Wide cost/performance range for implementations.

11 InfiniBand Overview cont'
Reliability: reliable transport service in HW; automatic path migration in HW (fault tolerance).
Scalability/flexibility: up to 64K nodes in a subnet and up to 2^128 nodes in a network; multiple link widths and media (copper traces, fiber); auto-negotiation of link width and transfer rate.

12 InfiniBand architecture
InfiniBand is a layered architecture. As in an IP network, each layer supplies services to the layer above it, and an IB packet is built from headers added by each layer. The layers' responsibilities include:
Ensuring correct routing of a packet.
Ensuring correct data.
Ensuring QoS.
And more.

13 InfiniBand architecture cont'
[Diagram: an IB end node implements the full stack (application, upper-layer protocols, transport, network, link and physical layers); an IB switch relays packets at the link/physical layers, while an IB router relays packets at the network layer and can connect to a legacy router.]

14 InfiniBand architecture features
Multiple transport services: reliable and unreliable, connected and datagram.
Enables memory exposure to a remote node: RDMA-read and RDMA-write.
Enables network partitioning: partition key and routing programming.
Enables user-level access to I/O: the adapter validates access rights and translates memory addresses.

15 InfiniBand architecture features cont'
Enables dynamic load balancing, within an end node or in the fabric.
Multiple levels of QoS decisions.

16 Hardware description

17 Hardware system description
[Board diagram: a Xilinx XCV400E FPGA (driven at 125 MHz DDR) feeds Agilent's 10:1 SerDes, which drives the 2.5 Gb/sec IB port. The board also carries an I2C connector and I2C transmitter interface, a JTAG connector, an oscillator, a power unit, a Mictor connector, and a PCI connector used for power and reset.]

18 System interface
The board is a standard PCI form factor and contains the following interfaces:
 I2C – software interface: used to load data (10 bits for each byte, per the InfiniBand spec's 8b/10b line coding), to load commands, and to control the Ibgenerator. We used this interface because it is a very simple and cheap solution.
 InfiniBand connector – the interface to the InfiniBand fabric. We use a 1x connector, per the InfiniBand spec.

19 System interface
 JTAG – a common interface for connecting to FPGAs. On our board, the JTAG interface is used to program the Xilinx FPGA and to debug the Verilog code.
 PCI interface – we did not use a "real" PCI interface; this connector is used only to draw power from a PC and to receive the reset signal.

20 System flow
Data is received from the I2C interface into the FPGA.
Data can be written to 3 different locations in the FPGA:
 Data array – a 256x32-bit array holding the data to be transmitted to the IB fabric.
 Command array – a 32x32-bit array in which each row (each 32 bits) holds a command to execute. Each command holds the following information:
  A pointer to the start address in the data array.
  A pointer to the end address in the data array.

21 System flow cont'
  The number of times to transmit this data.
  A pointer to the next command to execute.
 Status register – this is the "go" command of the system. To start a transmission we write the address of the command to be executed, and the FPGA starts sending that data. (A sketch of how such a command word might be packed follows below.)
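The slides name the command fields but not their bit positions. The following is a minimal sketch, assuming an illustrative packing of the 32-bit command word; the field widths and positions are our assumptions, not the real Ibgenerator layout.

#include <stdint.h>

/* Hypothetical packing of one 32-bit command-array entry. The slides name
 * the fields (start/end pointers into the 256x32 data array, a repeat
 * count, and a pointer to the next command) but not their bit positions;
 * the widths below are illustrative only. */
enum {
    CMD_START_SHIFT  = 24,  /* bits 31..24: start address in the data array (0..255) */
    CMD_END_SHIFT    = 16,  /* bits 23..16: end address in the data array (0..255)   */
    CMD_REPEAT_SHIFT = 5,   /* bits 15..5 : times to transmit this data              */
    CMD_NEXT_SHIFT   = 0    /* bits  4..0 : index of the next command (0..31)        */
};

static uint32_t ibgen_pack_command(uint8_t start, uint8_t end,
                                   uint16_t repeat, uint8_t next)
{
    return ((uint32_t)start << CMD_START_SHIFT) |
           ((uint32_t)end   << CMD_END_SHIFT)   |
           ((uint32_t)(repeat & 0x7FF) << CMD_REPEAT_SHIFT) |
           ((uint32_t)(next & 0x1F)   << CMD_NEXT_SHIFT);
}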

22 System flow cont' – Send TS1
[Diagram: the status register points at a command-array entry; that command points at the TS1 data in the data array, and the selected data is streamed out to the SerDes.]

23 System flow cont'
Once the FPGA starts to work on a command, it sends 10 bits of data at a rate of 125 MHz DDR to the SerDes, together with the TBC (Transmit Byte Clock) signal.
The SerDes then transmits this data at a rate of 2.5 Gbit/sec on differential lines.
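These rates are consistent: 10 bits per symbol x 125 MHz x 2 transfers per clock (DDR) = 2.5 Gbit/sec, which matches the 1x line rate quoted for the IB port.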

24 Signal integrity

25 Software description

26 SW description
The software project contains 3 major libraries:
 MPGA library
 I2C library
 IB generator library

27 MPGA library The MPGA (Management Packet Generator Analyzer) library provides software for generation and analysis of InfiniBand packets. This library is especially important when using the IB generator for sending and receiving all kinds of InfiniBand packets. Refer to mpga.h and the related files packet_append.h, packet_utilities and ib_opcodes.h for further details.

28 MPGA structure
This library has 3 hierarchical levels:
 The first level is the upper level, containing the user-interface functions.
 The second level is in charge of the InfiniBand packet-generation building blocks.
 The last level is a hidden part of the library, containing all of the internal functions used only by this library.

29 MPGA structure
[Diagram: the level-1 headers expose the user level, the level-2 headers expose the building blocks, and the level-3 headers hold the internal functions.]

30 MPGA cont' – Generating packet flow
Building a transport packet: the raw data to send (the payload) is prepended with a BTH (Base Transport Header), giving BTH | PAYLOAD.
[Diagram: the transport layer of an IB end node adds the BTH to the payload.]

31 MPGA cont' – Generating packet flow
Building a link-layer packet: the LRH (Local Route Header) is prepended and the ICRC and VCRC are appended, giving LRH | BTH | PAYLOAD | ICRC | VCRC.
[Diagram: the link layer of an IB end node adds the LRH and the CRCs.]

32 MPGA cont' – Analyzing packet flow
Analyzing the link layer of the incoming packet: the LRH and the trailing ICRC/VCRC are identified and checked, leaving the remaining fields for the transport layer.
Analyzing the transport layer of the incoming packet: LRH | BTH | DETH | Payload | ICRC | VCRC, from which the payload pointer and the packet size are extracted.
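For reference, the header sizes behind this layout come from the InfiniBand spec (LRH 8 bytes, BTH 12 bytes, DETH 8 bytes, ICRC 4 bytes, VCRC 2 bytes). The struct below is a minimal sketch of the on-wire layout of a datagram packet; it is not the MPGA library's actual data structures.

#include <stdint.h>

/* Illustrative on-wire layout of a datagram IB packet
 * (LRH | BTH | DETH | payload | ICRC | VCRC). Header sizes follow the
 * InfiniBand spec; this is NOT the MPGA library's internal format. */
typedef struct {
    uint8_t lrh[8];   /* Local Route Header: VL, LIDs, packet length        */
    uint8_t bth[12];  /* Base Transport Header: opcode, P_Key, dest QP, PSN */
    uint8_t deth[8];  /* Datagram Extended Transport Header: Q_Key, src QP  */
    /* variable-length payload follows here */
} ib_ud_headers_t;

/* Trailing checks appended after the payload:
 *   ICRC (4 bytes) - invariant CRC over fields that do not change end to end
 *   VCRC (2 bytes) - variant CRC over the whole packet, recomputed per link  */
enum { IB_ICRC_BYTES = 4, IB_VCRC_BYTES = 2 };

/* Total packet size for a given payload length (headers + payload + CRCs). */
static inline uint32_t ib_ud_packet_bytes(uint32_t payload_len)
{
    return (uint32_t)sizeof(ib_ud_headers_t) + payload_len
           + IB_ICRC_BYTES + IB_VCRC_BYTES;
}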

33 MPGA cont' – special features
Endian proof.
ICRC/VCRC calculation (Cyclic Redundancy Code).
Error generation.
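To illustrate what such a CRC routine involves, here is a generic bit-serial CRC over a byte buffer. The polynomial and seed values in the comments are our assumptions about what the InfiniBand ICRC/VCRC use, and the sketch ignores the spec's rules about masking variant header fields out of the ICRC; it is not the MPGA implementation.

#include <stdint.h>
#include <stddef.h>

/* Generic MSB-first bit-serial CRC, parameterized by width and polynomial.
 * Illustration only: the real ICRC/VCRC rules (seed values, masked LRH/BTH
 * fields, bit ordering) are defined by the InfiniBand spec and are not
 * reproduced here. */
static uint32_t crc_bitwise(const uint8_t *buf, size_t len,
                            unsigned width, uint32_t poly, uint32_t init)
{
    const uint32_t topbit = 1UL << (width - 1);
    const uint32_t mask   = (width == 32) ? 0xFFFFFFFFUL : ((1UL << width) - 1);
    uint32_t crc = init & mask;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)buf[i] << (width - 8);   /* bring the next byte in */
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & topbit) ? ((crc << 1) ^ poly) : (crc << 1);
            crc &= mask;
        }
    }
    return crc;
}

/* Example invocations (polynomial/seed values are assumptions, not taken
 * from the MPGA sources):
 *   icrc = crc_bitwise(pkt, n, 32, 0x04C11DB7, 0xFFFFFFFF);  // 32-bit CRC
 *   vcrc = crc_bitwise(pkt, n, 16, 0x100B,     0xFFFF);      // 16-bit CRC  */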

34 IB generator library
This library is the main driver for the Ibgenerator.
The library uses the I2C library to communicate with the Ibgenerator through the I2C master card (CALIBRE).
All of the basic structures of the Ibgenerator library are based on the FPGA I2C interface defined in the hardware section of this project.

35 IB generator library
The main features are:
 8b/10b link PHY section: sending TS1/TS2 (training sequences one and two).
 Logical link: sending flow control in init and normal state.
 Packets: sending a regular IB packet; sending big buffers (4K MTU).
(A hypothetical usage sketch follows below.)
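The slides list these features but not the library's API; the call names below are hypothetical, invented only to show the bring-up order implied by the feature list (physical link training, then flow-control credits, then packets).

#include <stdint.h>
#include <stddef.h>

/* Hypothetical prototypes -- the real Ibgenerator API is not shown in the
 * slides; names and signatures are invented only for this illustration. */
int ibgen_send_ts1(void);
int ibgen_send_ts2(void);
int ibgen_send_flow_control(int init_state);
int ibgen_send_packet(const uint8_t *pkt, size_t len);

int demo_link_bringup_and_send(const uint8_t *pkt, size_t len)
{
    /* 1. Physical link: stream the TS1 and TS2 training sequences. */
    if (ibgen_send_ts1() || ibgen_send_ts2())
        return -1;

    /* 2. Logical link: flow-control credits, init state then normal state. */
    if (ibgen_send_flow_control(1) || ibgen_send_flow_control(0))
        return -1;

    /* 3. Data: send a regular IB packet (built with the MPGA library) over
     *    I2C into the FPGA's data/command arrays and trigger transmission. */
    return ibgen_send_packet(pkt, len);
}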

36 I2C library
The I2C library is based on the API provided by the CALIBRE company.
The library provides the I2C interface to the IB generator board.

37 Demonstration preview

38 IB tracer
Init the system:
 Ibgenerator device installation.
 CALIBRE card installation.
 Installation of the Ibgenerator software package.
Connect to the Agilent IB tracer.
Start the Ibgenerator GUI:
 Send TS1.
 Send TS2.
 Send idle data.
 Send credit packet (init and normal state).

39 Demonstration preview
 Send the IB golden packet.
 Send RDMA write.
 Send Ack.
 Send errors: ICRC error, VCRC error, packet-length error.
 Send a big packet (4K MTU).

40 Demonstration preview
Connect to the Mellanox InfiniBridge device through the Agilent IB tracer.
Send the following packets:
 TS1.
 TS2.
 Idle data.
 Credit packet (init and normal state).
Show the physical and logical link state.
Send data packets to the Mellanox device.

41 Demonstration preview

42 Demonstration

