
1 Comprehensive environment for benchmarking using FPGAs: ATHENa - Automated Tool for Hardware EvaluatioN

2 Modern Benchmarking: Natural Progression of Tools
[table: Software - eBACS (D. Bernstein, T. Lange); ASICs and FPGAs - ??]

3 ATHENa - Automated Tool for Hardware EvaluatioN
A set of scripts written in Perl aimed at AUTOMATED generation of OPTIMIZED results for MULTIPLE hardware platforms. Currently under development at George Mason University. Version 0.3.1: http://cryptography.gmu.edu/athena

4 Why Athena?
"The Greek goddess Athena was frequently called upon to settle disputes between the gods or various mortals. Athena, Goddess of Wisdom, was known for her superb logic and intellect. Her decisions were usually well-considered, highly ethical, and seldom motivated by self-interest." (from "Athena, Greek Goddess of Wisdom and Craftsmanship")

5 Designers of ATHENa
- Venkata "Vinny", MS CpE student
- Ekawat "Ice", MS CpE student
- Marcin, PhD ECE student
- Rajesh, PhD ECE student
- Xin, PhD ECE student
- Michal, PhD exchange student from Slovakia

6 Basic Dataflow of ATHENa
[diagram: the Designer prepares HDL against the provided interfaces + testbenches and submits HDL + scripts + configuration files to the ATHENa Server; the server runs FPGA synthesis and implementation with the FPGA tools and returns a result summary + database entries; Users issue database queries to obtain rankings of designs and can download scripts and configuration files]

7 [diagram: ATHENa inputs - synthesizable source files, configuration files, testbench, constraint files; outputs - result summary (user-friendly) and database entries (machine-friendly)]

8 [diagram: simplified flow - synthesizable source files + configuration files in, result summary (user-friendly) out]

9 ATHENa Major Features (1)
- synthesis, implementation, and timing analysis in batch mode
- support for devices and tools of multiple FPGA vendors
- generation of results for multiple families of FPGAs of a given vendor
- automated choice of a best-matching device within a given family

10 ATHENa Major Features (2)
- automated verification of the design through simulation in batch mode
- EITHER exhaustive search for optimum tool options OR heuristic adaptive optimization strategies aimed at maximizing selected performance measures (e.g., speed, area, speed/area ratio, power, cost)

12 Multi-Pass Place-and-Route Analysis
GMU SHA-512 on Xilinx Virtex 5; 100 runs with different placement starting points.
[chart: distribution of the minimum clock period (the smaller the better); spread between the best and worst runs is ~20%]

13 Dependence of Results on Requested Clock Frequency

14 ATHENa Applications
- single_run: one set of options
- placement_search: one set of options; multiple starting points for placement
- exhaustive_search: multiple sets of options; multiple starting points for placement; multiple requested clock frequencies
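
The three modes differ only in which dimensions of the search space they sweep. Below is a minimal Python sketch of the idea, not ATHENa's actual Perl implementation: run_tools and the option names are hypothetical placeholders for a vendor tool invocation.

```python
import itertools
import random

# Hypothetical stand-in for one pass of the vendor tools (synthesis +
# place & route). Real ATHENa invokes Xilinx/Altera command-line tools;
# here we just return a fake, deterministic "minimum clock period" in ns.
def run_tools(options, placement_seed):
    rng = random.Random(hash((tuple(sorted(options.items())), placement_seed)))
    return 5.0 + rng.random()

# Hypothetical option space; actual option names depend on the FPGA tools.
option_space = {
    "optimization_target": ["speed", "area"],
    "effort_level": ["standard", "high"],
}
seeds = [1, 11, 21]  # multiple placement starting points (cf. placement_search)

# exhaustive_search: sweep all option combinations x all placement seeds,
# and keep the run with the smallest clock period.
results = []
for values in itertools.product(*option_space.values()):
    options = dict(zip(option_space, values))
    for seed in seeds:
        results.append((run_tools(options, seed), options, seed))

best_period, best_options, best_seed = min(results, key=lambda r: r[0])
print(f"Best: {best_period:.2f} ns with {best_options}, seed {best_seed}")
```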

15 SHA-1 Results
[chart: throughput in Mbit/s for several SHA-1 architectures on Virtex 5, Virtex 4, and Spartan 3]

16 ATHENa Results for SHA-1, SHA-256 & SHA-512

17 Ideas (1)
Select several representative FPGA platforms with significantly different properties, e.g.:
- vendor: Xilinx vs. Altera
- process: 90 nm vs. 65 nm
- LUT size: 4-input vs. 6-input
- optimization target: low-cost vs. high-performance
Use ATHENa to characterize all SHA-3 candidates and SHA-2 using these platforms in terms of the target performance metrics (e.g., throughput/area ratio).

18 Ideas (2)
- Calculate the ratio of SHA-3 candidate performance to SHA-2 performance (at the same security level)
- Calculate the geometric mean of these ratios over multiple platforms
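
A small Python sketch of this calculation; the throughput numbers are hypothetical placeholders, not measured results.

```python
from math import prod

# Hypothetical throughputs in Mbit/s (NOT measured results) for one SHA-3
# candidate and SHA-2 at the same security level, on three platforms.
candidate = {"Spartan 3": 800.0, "Virtex 5": 2400.0, "Cyclone III": 900.0}
sha2      = {"Spartan 3": 700.0, "Virtex 5": 2000.0, "Cyclone III": 850.0}

# Per-platform ratio: candidate performance / SHA-2 performance.
ratios = [candidate[p] / sha2[p] for p in candidate]

# Geometric mean over platforms: n-th root of the product of the n ratios.
geo_mean = prod(ratios) ** (1.0 / len(ratios))
print(f"geometric mean ratio: {geo_mean:.3f}")
```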

19 Xilinx FPGA Devices
Technology | Low-cost | High-performance
120/150 nm | - | Virtex 2, 2 Pro
90 nm | Spartan 3 | Virtex 4
65 nm | - | Virtex 5
45 nm | Spartan 6 | -
40 nm | - | Virtex 6

20 Xilinx FPGA Device Support by Tools
Version | Supported devices (low-cost and high-performance)
Xilinx ISE 10.1 | all devices, up to Virtex 5
Xilinx WebPACK 11.1 | smallest devices, up to Virtex 5
Xilinx WebPACK 11.3 | smallest devices up to Virtex 5, plus the smallest Spartan 6 and Virtex 6

21 Altera FPGA Devices
Technology | Low-cost | Mid-range | High-performance
130 nm | Cyclone | - | Stratix
90 nm | Cyclone II | - | Stratix II
65 nm | Cyclone III | Arria I | Stratix III
40 nm | Cyclone IV | Arria II | Stratix IV

22 Altera FPGA Device Support by Tools
Version | Low-cost | Mid-range | High-performance
Quartus 7.1 | Cyclone IV: none; Cyclone III: all | Arria GX: all; Arria II GX: none | Stratix II: smallest; Stratix III: none
Quartus 8.1 | Cyclone IV: none; Cyclone III: all | Arria GX: all; Arria II GX: none | Stratix I, II, III: smallest
Quartus 9.0 sp2 (Sep. 09) | Cyclone IV: none; Cyclone III: all | Arria GX: all; Arria II GX: none | Stratix I, II, III: smallest
Quartus 9.1 (Nov. 09) | Cyclone IV: smallest; Cyclone III: all | Arria GX: all; Arria II GX: smallest | Stratix I, II, III: all; Stratix IV: none

23 FPGA and ASIC Performance Measures

24 The common ground is vague
- Hardware performance: cycles per block, cycles per byte, latency (cycles), latency (ns), throughput for long messages, throughput for short messages, throughput at 100 KHz, clock frequency, clock period, critical path delay, Modexp/s, PointMul/s
- Hardware cost: slices, slices occupied, LUTs, 4-input LUTs, 6-input LUTs, FFs, gate equivalents (GE), size on ASIC, DSP blocks, BRAMs, number of cores, CLB, MUL, XOR, NOT, AND
- Hardware efficiency: hardware performance / hardware cost

25 Our Favorite Hardware Performance Metrics
- Mbit/s for throughput
- ns for latency
These allow easy cross-comparison among implementations in software (microprocessors), FPGAs (various vendors), and ASICs (various libraries).

26 But how to define and measure throughput and latency for hash functions?
Time to hash N blocks of message:
Htime(N, T_CLK) = Initialization Time(T_CLK) + N * Block Processing Time(T_CLK) + Finalization Time(T_CLK)
Latency = time to hash ONE block of message:
Latency = Htime(1, T_CLK) = Initialization Time + Block Processing Time + Finalization Time
Throughput (for long messages):
Throughput = Block size / (Htime(N+1, T_CLK) - Htime(N, T_CLK)) = Block size / Block Processing Time(T_CLK)

27 But how to define and measure throughput and latency for hash functions?
Initialization Time(T_CLK) = cycles_I * T_CLK
Block Processing Time(T_CLK) = cycles_P * T_CLK
Finalization Time(T_CLK) = cycles_F * T_CLK
where:
- Block size: from the specification
- cycles_I, cycles_P, cycles_F: from analysis of the block diagram and/or functional simulation
- T_CLK: from the place & route report (or experiment)
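
An illustrative Python sketch of these formulas (not ATHENa code); all cycle counts and the clock period below are hypothetical example values.

```python
# Compute latency and long-message throughput from cycle counts and the
# achieved clock period, following the formulas on slides 26-27.

def latency_ns(cycles_i, cycles_p, cycles_f, t_clk_ns):
    """Time to hash ONE message block: Htime(1, T_CLK), in ns."""
    return (cycles_i + cycles_p + cycles_f) * t_clk_ns

def throughput_mbps(block_size_bits, cycles_p, t_clk_ns):
    """Long-message throughput = block size / block processing time.
    bits / ns = Gbit/s, so multiply by 1000 to get Mbit/s."""
    return 1000.0 * block_size_bits / (cycles_p * t_clk_ns)

# Hypothetical core: 512-bit blocks, 5/65/5 cycles, 4.0 ns clock period.
print(latency_ns(5, 65, 5, 4.0))          # 300.0 ns
print(throughput_mbps(512, 65, 4.0))      # ~1969 Mbit/s
```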

28 How to compare hardware speed vs. software speed?
eBASH reports (http://bench.cr.yp.to/results-hash.html):
- In graphs: Time(n) = time in clock cycles vs. message size in bytes, for n-byte messages with n = 0, 1, 2, 3, ..., 2048, 4096
- In tables: performance in cycles/byte for n = 8, 64, 576, 1536, 4096, and long messages
Performance for long message = (Time(4096) - Time(2048)) / 2048

29 How to compare hardware speed vs. software speed?
Throughput [Gbit/s] = (8 [bits/byte] * clock frequency [GHz]) / Performance for long message [cycles/byte]
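
For example (hypothetical numbers, for illustration only): a software implementation running at 10 cycles/byte on a 3 GHz processor achieves 8 * 3 / 10 = 2.4 Gbit/s, which can then be compared directly against the Mbit/s figures reported for FPGA and ASIC implementations.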

30 How to measure hardware cost in FPGAs?
1. Stand-alone cryptographic core on an FPGA: cost of the smallest FPGA that can fit the core. Unit: USD [FPGA vendors would need to publish the MSRP (manufacturer's suggested retail price) of their chips - not very likely] or the size of the chip in mm² [easy to obtain].
2. Part of an FPGA System-on-Chip: resource vector - (CLB slices, BRAMs, MULs, DSP units) for Xilinx; (LEs, memory bits, PLLs, MULs, DSP units) for Altera.
3. FPGA prototype of an ASIC implementation: force the implementation to use only reconfigurable logic (no DSPs or multipliers, distributed memory instead of BRAM) and use CLB slices as the metric [LEs for Altera].

31 How to measure hardware cost in ASICs?
1. Stand-alone cryptographic core: Cost = f(die area, pin count); tables/formulas available from semiconductor foundries.
2. Part of an ASIC System-on-Chip: Cost ~ circuit area. Units: μm² or GE (gate equivalent) = the size of a NAND2 cell.

32 Deliverables (1)
1. Detailed block diagram of the Datapath, with names of all signals matching the VHDL code [electronic version a bonus]
2. Interface with the division into the Datapath and the Controller [electronic version]
3. ASM charts of the Controller, and a block diagram of connections among FSMs (if more than one is used) [electronic version a bonus]
4. RTL VHDL code of the Datapath, the Controller, and the Top-Level Circuit
5. Updated timing and area analysis formulas, with timing confirmed through simulation

33 Deliverables (2)
6. Report on verification:
- highest-level entity verified for functional correctness: functional simulation, post-synthesis simulation, timing simulation [bonus]
- verification of lower-level entities: name of the entity, testbench used for verification, result of verification (incorrect behavior, possible source of error)

34 Deliverables (3)
7. Results of benchmarking using ATHENa:
- entire core or the highest-level entity verified for correct functionality
- Xilinx Spartan 3, Virtex 4, Virtex 5
- three methods of testing: single_run; placement_search [cost_table = 1, 11, 21]; exhaustive_search [cost_table = 31, 41, 51; speed or area; two sets of requested frequencies]
- results generated by ATHENa
- your own graphs and charts
- observations and conclusions

35 Bonus Deliverables (4)
8. Pseudocode [but not C code]
9. Bugs and suspicious behavior of ATHENa
10. Additional results of benchmarking using ATHENa:
- Altera Cyclone II, Stratix II, Cyclone III, Arria I, Stratix III
- three methods of testing: single_run; placement_search [seed = 1, 1000, 2000]; exhaustive_search [seed = 3000, 4000, 5000; speed or area; two sets of requested frequencies]
- results generated by ATHENa
- your own graphs and charts
- observations and conclusions

36 Bonus Deliverables (5)
11. Report from the meeting with students working on the same SHA core:
- summary of major differences
- advantages and disadvantages of your design
12. Bugs found in the padding script, testbench, class examples, slides, documentation, SHA-3 packages, etc.

37 Bonus Deliverables (6)
13. Extending the design to cover all hash function variants:
- hash value sizes: 512 [highest priority], 384, 224
- other variant/parameter support specific to a given hash function
- support through generics or constants
14. Padding in hardware, assuming that the message size before padding is already a multiple of the word size, the byte size, or a single bit (a sketch of the byte-aligned case follows below)
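
For reference, a minimal Python sketch (NOT the course's padding script) of SHA-256-style padding for the "multiple of the byte size" case from item 14.

```python
# Append a 1 bit (0x80 when byte-aligned), zero-fill to 56 mod 64 bytes,
# then append the original message length in bits as a 64-bit big-endian
# integer, so the padded message is a whole number of 512-bit blocks.

def sha256_pad(msg: bytes) -> bytes:
    bit_len = 8 * len(msg)
    padded = msg + b"\x80"
    padded += b"\x00" * ((56 - len(padded)) % 64)
    padded += bit_len.to_bytes(8, "big")
    return padded

assert len(sha256_pad(b"abc")) % 64 == 0   # one 512-bit block for "abc"
```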

38 Composition of Students
- 14 local students (including 3 former BSCpE graduates)
- 14 international students
- 4 GWU PhD candidates

39 After Grading
1. Summary of results published on the course web page
2. Selected students invited to develop articles/reports to be posted on the ATHENa web page and the SHA-3 Zoo web page
3. Unification, generalization, and optimization of the code by Ice, myself, and other students
4. Presentation to NIST, conference submissions, and a presentation at the Second SHA-3 Conference in Santa Barbara in August 2010

