EECC722 - Shaaban
Lecture #7, Fall 2000 (10-2-2000): Configurable Computing

Slide 1: What is Configurable Computing?
- Spatially-programmed connection of processing elements.
- Customizing computation to a particular application by changing hardware functionality on the fly.
- "Hardware" is customized to the specifics of the problem: a direct map of problem-specific dataflow and control.
- Circuits are "adapted" as problem requirements change.

Slide 2: Spatial vs. Temporal Computing
[Figure comparing spatial and temporal computation]

Slide 3: Why Configurable Computing?
- To improve performance over a software implementation (e.g. signal processing applications in configurable hardware).
- To improve product flexibility compared to fixed hardware (e.g. encryption or network protocols in configurable hardware).
- To use the same hardware for different purposes at different points in the computation.

Slide 4: Configurable Computing Application Areas
- Signal processing
- Encryption
- Low power (through hardware "sharing")
- Variable-precision arithmetic
- Logic-intensive applications
- In-the-field hardware enhancements
- Adaptive (learning) hardware elements

Slide 5: Sample Configurable Computing Application: Prototype Video Communications System
- Uses a single FPGA to perform four functions that typically require separate chips.
- A memory chip stores the four circuit configurations and loads them sequentially into the FPGA.
- Initially, the FPGA's circuits are configured to acquire digitized video data. The chip is then rapidly reconfigured to transform the video information into a compressed form, and reconfigured again to prepare it for transmission. Finally, the FPGA circuits are reconfigured to modulate and transmit the video information.
- At the receiver, the four configurations are applied in reverse order to demodulate the data, decompress the image, and send it to a digital-to-analog converter for display on a television screen.

Slide 6: Early Configurable Computing Successes
- The fastest RSA implementation is on a reconfigurable machine (DEC PAM).
- Splash2 (SRC) performs DNA sequence matching at 300x the speed of a Cray-2 and 200x a 16K-processor CM-2.
- Many modern processors and ASICs are verified using FPGA emulation systems.
- For many signal processing/filtering operations, single-chip FPGAs outperform DSPs by 10-100x.

Slide 7: Defining Terms
- Fixed Function: computes one function (e.g. FP multiply, divider, DCT); function defined at fabrication time.
- Programmable: computes "any" computable function (e.g. processors, DSPs, FPGAs); function defined after fabrication.
- Parameterizable Hardware: performs a limited "set" of functions.

Slide 8: Conventional Programmable Processors vs. Configurable Devices
Conventional programmable processors:
- Moderately wide datapaths that have been growing wider over time (e.g. 16, 32, 64, 128 bits).
- Large on-chip instruction caches, also growing over time, that can now hold hundreds to thousands of instructions.
- High-bandwidth instruction distribution, so that several instructions may be issued per cycle, at the cost of dedicating considerable die area to instruction distribution.
- A single thread of computation control.
Configurable devices (such as FPGAs):
- Narrow datapath (almost always one bit wide).
- On-chip space for only one instruction per compute element, i.e. the single instruction that tells the FPGA array cell what function to perform and how to route its inputs and outputs.
- Minimal die area dedicated to instruction distribution, so it takes hundreds of thousands of compute cycles to change the active set of array instructions.

Slide 9: Field-Programmable Gate Arrays (FPGAs)
- The chip contains many small building blocks, known as CLBs (Configurable Logic Blocks), that can be configured to implement different functions.
- FPGAs are typically "programmed" by having them read in a stream of configuration information from off-chip. They are typically in-circuit programmable (as opposed to EPLDs, which are typically programmed by removing them from the circuit and using a PROM programmer).
- Only about 25% of an FPGA's gates are application-usable; the rest control the configurability, etc.
- As much as 10x clock-rate degradation compared to a custom hardware implementation.
- Typically built using SRAM fabrication technology. Since FPGAs "act" like SRAM or logic, they lose their program when they lose power: configuration bits must be reloaded on power-up, usually from a PROM or downloaded from memory via an I/O bus.

Slide 10: Programmable Circuitry
- Programmable circuits in a field-programmable gate array (FPGA) can be created or removed by sending signals to gates in the logic elements.
- A built-in grid of circuits arranged in columns and rows allows the designer to connect a logic element to other logic elements, or to an external memory or microprocessor.
- The logic elements are grouped in blocks that perform basic binary operations such as AND, OR, and NOT.
- Several firms, including Xilinx and Altera, have developed devices with the capability of 100,000 equivalent gates.

Slide 11: Look-Up Table (LUT)
A 2-LUT is a small memory addressed by its two inputs (In1, In2); the bit stored at each address is the output. Example contents (XOR):

  In  | Out
  00  | 0
  01  | 1
  10  | 1
  11  | 0

Slide 12: LUTs
- A K-LUT is a K-input lookup table.
- It can implement any function of K inputs by programming its table.
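To make the LUT idea concrete, here is a small illustrative C model (not from the lecture): a K-LUT is just a 2^K-entry truth table indexed by the packed input bits.

```c
#include <stdint.h>
#include <stdio.h>

/* Model of a K-input LUT: the truth table is stored as a bitmask,
 * one output bit per possible input combination (2^K entries). */
typedef struct {
    unsigned k;        /* number of inputs */
    uint64_t table;    /* truth table, valid for k <= 6 */
} lut_t;

/* Evaluate the LUT: 'inputs' packs the K input bits (bit 0 = In2, bit 1 = In1). */
static int lut_eval(const lut_t *lut, unsigned inputs) {
    return (int)((lut->table >> inputs) & 1u);
}

int main(void) {
    /* 2-LUT programmed as XOR, matching the truth table on slide 11:
     * address 00 -> 0, 01 -> 1, 10 -> 1, 11 -> 0, i.e. table bits 0110 = 0x6. */
    lut_t xor2 = { .k = 2, .table = 0x6 };
    for (unsigned in = 0; in < 4; in++)
        printf("In=%u%u Out=%d\n", (in >> 1) & 1, in & 1, lut_eval(&xor2, in));
    return 0;
}
```

Reprogramming the same cell for a different function is just a matter of storing a different table value.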

Slide 13: Conventional FPGA Tile
- A K-LUT (typically K = 4) with an optional output flip-flop.

Slide 14: XC4000 CLB
- Cascaded LUTs: two 4-LUTs whose outputs feed one 3-LUT.

Slide 15: Density Comparison (figure)

Slide 16: Processor vs. FPGA Area (figure)

Slide 17: Processors and FPGAs (figure)

Slide 18: Programming/Configuring FPGAs
- Software (e.g. XACT or other tools) converts a design to netlist format.
- XACT then partitions the design into logic blocks, and finds a good placement for each block and the routing between them (PPR).
- A serial bitstream is then generated and fed down to the FPGAs themselves.
- The configuration bits are loaded into a "long shift register" on the FPGA; the output lines from this shift register are the control wires that govern the behavior of all the CLBs on the chip.
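Purely as an illustration of the "long shift register" description above (invented function names; real devices have more elaborate configuration logic), bit-serial loading of a bitstream can be modeled like this:

```c
#include <stdint.h>
#include <stddef.h>

/* Toy model of serial configuration: shift the bitstream, MSB first,
 * into a configuration "shift register" whose parallel outputs would
 * drive the CLB control wires on a real device. */
static void load_bitstream(uint8_t *config_reg, size_t reg_bits,
                           const uint8_t *bitstream, size_t stream_bits) {
    for (size_t i = 0; i < stream_bits; i++) {
        int bit = (bitstream[i / 8] >> (7 - (i % 8))) & 1;
        /* every stored bit advances one position along the chain */
        for (size_t j = 0; j + 1 < reg_bits; j++)
            config_reg[j] = config_reg[j + 1];
        config_reg[reg_bits - 1] = (uint8_t)bit;  /* new bit enters at the end */
    }
}

int main(void) {
    uint8_t cfg[16] = {0};              /* 16 one-bit CLB control outputs */
    uint8_t stream[2] = {0xA5, 0x3C};   /* 16 configuration bits          */
    load_bitstream(cfg, 16, stream, 16);
    return 0;
}
```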

Slide 19: Configurable Computing Architectures
- Configurable computing architectures combine elements of general-purpose computing and application-specific integrated circuits (ASICs).
- A general-purpose processor operates with fixed circuits that perform multiple tasks under the control of software.
- An ASIC contains circuits specialized to a particular task and thus needs little or no software to instruct it.
- A configurable computer can execute software commands that alter its FPGA circuits as needed to perform a variety of jobs.

Slide 20: Hybrid-Architecture Computer
- Combines a general-purpose microprocessor and reconfigurable FPGA chips.
- A controller FPGA loads circuit configurations stored in memory onto the processor FPGA in response to requests from the operating program (a toy sketch follows this slide).
- If the memory does not contain a requested circuit, the processor FPGA sends a request to the PC host, which then loads the configuration for the desired circuit.
- Common hybrid configurable architecture today: an FPGA array on a board connected to the I/O bus.
- Future hybrid configurable architectures:
  - Integrate a region of configurable hardware (FPGA or something else?) onto the processor chip itself.
  - Integrate configurable hardware onto a DRAM chip => flexible computing without the memory bottleneck.
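A toy sketch (all names invented, only loosely modeled on the slide's description) of on-demand configuration loading: look for the requested circuit in the on-board configuration store and fall back to the PC host on a miss.

```c
#include <stdio.h>
#include <string.h>

#define MAX_CONFIGS 4

/* Invented types/names for illustration: a tiny on-demand configuration cache. */
typedef struct { char name[32]; const void *bits; } config_t;

static config_t local_store[MAX_CONFIGS];   /* configurations held in on-board memory */
static int      stored = 0;

/* Stand-in for fetching configuration bits from the PC host. */
static const void *fetch_from_host(const char *name) {
    printf("host: loading configuration '%s'\n", name);
    return name;  /* placeholder for real bitstream data */
}

/* Return the requested configuration, loading it from the host on a miss. */
static const void *request_config(const char *name) {
    for (int i = 0; i < stored; i++)
        if (strcmp(local_store[i].name, name) == 0)
            return local_store[i].bits;           /* hit: already on board */
    const void *bits = fetch_from_host(name);     /* miss: ask the host    */
    if (stored < MAX_CONFIGS) {
        strncpy(local_store[stored].name, name, sizeof local_store[stored].name - 1);
        local_store[stored].bits = bits;
        stored++;
    }
    return bits;
}

int main(void) {
    request_config("fir_filter");   /* miss: fetched from the host */
    request_config("fir_filter");   /* hit: served from local memory */
    return 0;
}
```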

Slide 21: Benefits of Reconfigurable Logic Devices
- Non-permanent customization and application development after fabrication ("late binding").
- Economies of scale (large, fixed design costs amortized across many applications).
- Time-to-market (accommodates evolving requirements and standards, new ideas).
Disadvantages:
- Efficiency penalty (area, performance, power).
- Correctness verification.

Slide 22: Spatial/Configurable Benefits and Drawbacks
Benefits:
- Roughly a 10x raw density advantage over processors.
- Potential for fine-grained (bit-level) control, which can offer another order of magnitude benefit.
- Locality: each compute/interconnect resource is dedicated to a single function.
Drawbacks:
- Must dedicate resources to every computational subtask.
- Infrequently needed portions of a computation sit idle --> inefficient use of resources.

Slide 23: Technology Trends Driving Configurable Computing
- Increasing gap between the "peak" performance of general-purpose processors and the "average actually achieved" performance: most programmers don't write code that gets anywhere near the peak performance of current superscalar CPUs.
- Improvements in FPGA hardware capacity and speed: FPGAs use standard SRAM processes and "ride the commodity technology" curve, getting volume pricing even though they provide a customized solution.
- Improvements in synthesis and FPGA mapping/routing software.
- Increasing number of transistors on a (processor) chip -- how to use them all? Bigger caches, SMT, IRAM, multiple processors... or an FPGA!

Slide 24: Overall Configurable Hardware Approach
- Select portions of an application where hardware customization will offer an advantage.
- Map those application phases to FPGA hardware, either by hand design or via VHDL => synthesis. If the design doesn't fit in the FPGA, re-select a (smaller) application phase and try again.
- Perform timing analysis to determine the rate at which the configurable design can be clocked.
- Write interface software for communication between the main processor and the configurable hardware:
  - Determine where input/output data communicated between software and configurable hardware will be stored.
  - Write code to manage its transfer (like a procedure-call interface in standard software).
  - Write code to invoke the configurable hardware (e.g. via memory-mapped I/O; see the sketch after this slide).
- Compile the software (including the interface code).
- Send configuration bits to the configurable hardware.
- Run the program.
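As a rough illustration of the interface-code step (not from the slides; the register map and addresses are invented), invoking configurable hardware via memory-mapped I/O usually amounts to writing operands and a start bit to device registers and polling a status register:

```c
#include <stdint.h>

/* Invented register map for a hypothetical memory-mapped FPGA accelerator.
 * This only runs on a system where the accelerator is actually mapped at
 * ACCEL_BASE; on a host PC it merely compiles. */
#define ACCEL_BASE    0x40000000u
#define REG_OPERAND_A (*(volatile uint32_t *)(ACCEL_BASE + 0x00))
#define REG_OPERAND_B (*(volatile uint32_t *)(ACCEL_BASE + 0x04))
#define REG_CONTROL   (*(volatile uint32_t *)(ACCEL_BASE + 0x08))
#define REG_STATUS    (*(volatile uint32_t *)(ACCEL_BASE + 0x0C))
#define REG_RESULT    (*(volatile uint32_t *)(ACCEL_BASE + 0x10))

#define CTRL_START  0x1u
#define STAT_DONE   0x1u

/* "Procedure call" wrapper around the configurable hardware:
 * write operands, start the operation, spin until done, read the result. */
uint32_t accel_call(uint32_t a, uint32_t b) {
    REG_OPERAND_A = a;
    REG_OPERAND_B = b;
    REG_CONTROL   = CTRL_START;
    while ((REG_STATUS & STAT_DONE) == 0)
        ;  /* busy-wait for the configurable hardware to finish */
    return REG_RESULT;
}
```

A real interface would also handle configuration loading and probably use interrupts or DMA instead of busy-waiting.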

Slide 25: Configurable Hardware Application Challenges
- This process turns application programmers into part-time hardware designers.
- Performance analysis problems => what should we put in hardware?
- Choice and granularity of computational elements.
- Choice and granularity of the interconnect network.
- The hardware/software co-design problem.
- Synthesis problems.
- Testing/reliability problems.

Slide 26: The Choice of the Computational Elements
- Reconfigurable Logic: bit-level operations (e.g. encoding).
- Reconfigurable Datapaths: dedicated data paths (e.g. filters, AGUs).
- Reconfigurable Arithmetic: arithmetic kernels (e.g. convolution).
- Reconfigurable Control: RTOS, process management.

Slide 27: Reconfigurable Processor Tools Flow
[Flow diagram: customer application / IP (C code) -> C compiler -> ARC object code; RTL HDL -> synthesis & layout -> configuration bits; the linker combines these into a Chameleon executable, exercised via a C model simulator, C debugger, and development board.]

Slide 28: Hardware Challenges in Using FPGAs for Configurable Computing
- Configuration overhead
- I/O bandwidth
- Speed, power, cost, density
- High-level language support
- Performance and space estimators
- Design verification
- Partitioning and mapping across several FPGAs

Slide 29: Configurable Hardware Research
- PRISM (Brown)
- PRISC (Harvard)
- DPGA-coupled uP (MIT)
- GARP, Pleiades, ... (UCB)
- OneChip (Toronto)
- REMARC (Stanford)
- NAPA (NSC)
- E5 etc. (Triscend)

Slide 30: Hybrid-Architecture RC Compute Models
- Models in which the processor's computation is unaffected by the array logic:
  - Interfacing
  - Dedicated I/O processor
- Instruction augmentation:
  - Special instructions / coprocessor ops
  - VLIW/microcoded extension to the processor
  - Configurable vector unit
- Autonomous co/stream processor

Slide 31: Hybrid-Architecture RC Compute Models: Interfacing
- Logic used in place of:
  - ASIC environment customization
  - External FPGA/PLD devices
- Examples: bus protocols, peripherals, sensors, actuators.
- Case for:
  - There is always some system adaptation to do.
  - Modern chips have the capacity to hold a processor plus glue logic, reducing part count.
  - Glue logic varies, so this value added must now be accommodated on chip (formerly at board level).

Slide 32: Example Interface/Peripherals: Triscend E5 (figure)

Slide 33: Hybrid-Architecture RC Compute Models: I/O Processor
- The array is dedicated to servicing an I/O channel (sensor, LAN, WAN, peripheral).
- Provides protocol handling and stream computation (compression, encryption).
- Looks like an I/O peripheral to the processor.
- The processor may be able to map the array in as needed, physical space permitting.
- Case for: there are many protocols and services, but only a few are needed at a time; dedicate attention to them and offload the processor.

Slide 34: NAPA 1000 Block Diagram
[Block diagram: RPC (Reconfigurable Pipeline Controller), ALP (Adaptive Logic Processor), System Port, TBT (ToggleBus Transceiver), PMA (Pipeline Memory Array), CR32 (CompactRISC 32-bit processor), BIU (Bus Interface Unit), CR32 peripheral devices, external memory interface, SMA (Scratchpad Memory Array), CIO (Configurable I/O).]

Slide 35: NAPA 1000 as I/O Processor
[Diagram: a system host connects to the NAPA 1000 via its system port; the NAPA 1000 connects to ROM & DRAM through its memory interface and, through its configurable I/O (CIO), to application-specific sensors, actuators, or other circuits.]

Slide 36: Hybrid-Architecture RC Compute Models: Instruction Augmentation
Observation: instruction bandwidth.
- A processor can only describe a small number of basic computations in a cycle: I instruction bits name at most 2^I operations.
- This is a tiny fraction of the operations one could perform, even restricted to w x w -> w operations, of which there are 2^(w*2^(2w)) (a worked numeric instance follows this slide).
- The processor could have to issue on the order of w*2^((2w)-I) operations just to describe some computations.
- An a priori selected base set of functions could be very bad for some applications.
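A worked numeric instance of this counting argument (illustrative values, not from the slides):

```latex
% Instruction-bandwidth counting argument with I = 32 instruction bits
% and w = 32-bit operands (illustrative values).
\begin{align*}
  \text{operations encodable per instruction} &= 2^{I} = 2^{32} \approx 4.3\times 10^{9} \\
  \text{functions } f:\{0,1\}^{2w}\to\{0,1\}^{w} &= \left(2^{w}\right)^{2^{2w}}
      = 2^{w\cdot 2^{2w}} = 2^{32\cdot 2^{64}}
\end{align*}
% The second number dwarfs the first, so any fixed instruction set covers
% only a vanishing fraction of the possible w x w -> w operations.
```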

Slide 37: Instruction Augmentation
Idea:
- Provide a way to augment the processor's instruction set with operations needed by a particular application.
- Close the semantic gap / avoid mismatch.
What's required:
- Some way to fit augmented instructions into the instruction stream.
- An execution engine for the augmented instructions (if programmable, it has its own instructions).
- Interconnect to the augmented-instruction hardware.

Slide 38: First Efforts in Instruction Augmentation
- PRISM: Processor Reconfiguration through Instruction Set Metamorphosis.
- PRISM-I: 68010 (10 MHz) + XC3090.
  - Can reconfigure the FPGA in one second!
  - 50-75 clocks for operations.

Slide 39: PRISM (Brown)
- FPGA on the bus, accessed as a memory-mapped peripheral.
- Explicit context management.
- Some software discipline required for use.
- ...not much of an "architecture" presented to the user.

Slide 40: PRISM-1 Results (figure: raw kernel speedups)

Slide 41: PRISC (Harvard)
- Takes the next step: what would it look like if we put it on chip, and how would it integrate into the processor ISA?
- Architecture:
  - Coupled into the register file as a "superscalar" functional unit.
  - Flow-through array (no state).

Slide 42: PRISC ISA Integration
- Adds an expfu instruction with an 11-bit address space for user-defined expfu operations.
- Faults on a PFU instruction mismatch; trap code services the instruction miss.
- All operations occur in a single clock cycle.
- Works easily with processor context switches: no state, plus the fault on a mismatched PFU instruction.
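A hedged software model of the dispatch described above (invented names and types, not PRISC's actual hardware or toolchain): an 11-bit function id is compared with the currently loaded PFU configuration, a mismatch "traps" to reload it, and the operation then completes as a stateless function of two register operands.

```c
#include <stdint.h>
#include <stdio.h>

/* Software model of a PRISC-style programmable functional unit (PFU). */
typedef uint32_t (*pfu_func_t)(uint32_t, uint32_t);

static uint16_t   loaded_id = 0xFFFF;   /* 11-bit id of the loaded configuration */
static pfu_func_t loaded_fn = 0;

/* "Trap handler": reconfigure the PFU for the requested function id. */
static void pfu_miss(uint16_t id, pfu_func_t fn) {
    printf("PFU miss: loading configuration 0x%03x\n", id);
    loaded_id = id;
    loaded_fn = fn;
}

/* Model of the expfu instruction: stateless, one result per invocation. */
static uint32_t expfu(uint16_t id, pfu_func_t fn, uint32_t ra, uint32_t rb) {
    if (id != loaded_id)          /* mismatch -> fault and reload */
        pfu_miss(id, fn);
    return loaded_fn(ra, rb);     /* flow-through: no state kept in the array */
}

/* Example user-defined operation: Hamming distance of the two operands. */
static uint32_t hamming_dist(uint32_t a, uint32_t b) {
    uint32_t x = a ^ b, n = 0;
    while (x) { n += x & 1; x >>= 1; }
    return n;
}

int main(void) {
    printf("%u\n", expfu(0x0A3, hamming_dist, 0xF0F0u, 0x0FF0u));  /* miss, then run */
    printf("%u\n", expfu(0x0A3, hamming_dist, 0xF0F0u, 0x0FF0u));  /* hit */
    return 0;
}
```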

Slide 43: PRISC Results
- All results are for compiled code, working from a MIPS binary.
- PFU operations fit in fewer than 200 4-LUTs (roughly a 64 x 3 array).
- Baseline: a 200 MHz MIPS processor.

Slide 44: Chimaera (Northwestern)
- Starts from the PRISC idea: integrate as a functional unit, no state, RFUOPs (like expfu), stall the processor on an instruction miss and reload.
- Adds: management of multiple loaded instructions, and support for more than 2 inputs.

Slide 45: Chimaera Architecture
- A "live" copy of register-file values feeds into the array.
- Each row of the array may compute from register values or from intermediates (other rows).
- A tag on the array indicates the RFUOP.

Slide 46: Chimaera Architecture (continued)
- The array can compute on values as soon as they are placed in the register file; the logic is combinational.
- When an RFUOP matches, the processor stalls until the result is ready.
- The critical path comes only from late inputs; the result is driven from the matching row.

Slide 47: GARP (Berkeley)
- Integrated as a coprocessor: similar bandwidth to the processor as a functional unit, but with its own access to memory.
- Supports multi-cycle operation: state is allowed, and a cycle counter tracks the operation.
- Fast operation selection: a cache for configurations, dense encodings, and a wide path to memory.

Slide 48: GARP ISA
- Coprocessor operations:
  - Issue gaconfig to make a particular configuration resident (it may be active or cached).
  - Explicitly move data to/from the array: 2 writes, 1 read (like an FU, but not 2W + 1R).
  - The processor suspends during the coprocessor operation; a cycle count tracks the operation.
- The array may directly access memory: the processor and array share a memory space, and the cache/MMU keeps them consistent.
- Can exploit streaming data operations.
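A hedged software model of the coprocessor usage pattern above (names and behavior invented for illustration; only gaconfig and the idea of explicit moves come from the slide): configure, write the operands, let the array run while the processor waits, then read the result.

```c
#include <stdint.h>
#include <stdio.h>

/* The array "configuration" is modeled here as a plain C function. */
typedef uint32_t (*garp_config_t)(uint32_t, uint32_t);

static garp_config_t resident_config;   /* currently resident configuration */
static uint32_t      array_reg[4];      /* array registers visible to moves */

static void gaconfig(garp_config_t cfg) {       /* make configuration resident */
    if (cfg != resident_config) {
        printf("loading array configuration\n");
        resident_config = cfg;
    }
}
static void     garp_move_to(int r, uint32_t v) { array_reg[r] = v; }
static uint32_t garp_move_from(int r)           { return array_reg[r]; }

/* The processor suspends while the array runs for a counted number of cycles;
 * in this toy model the array's work is just a call to the configured function. */
static void garp_run(unsigned cycles) {
    (void)cycles;
    array_reg[0] = resident_config(array_reg[0], array_reg[1]);
}

static uint32_t add_sat(uint32_t a, uint32_t b) {   /* example "configuration" */
    uint32_t s = a + b;
    return s < a ? 0xFFFFFFFFu : s;
}

int main(void) {
    gaconfig(add_sat);
    garp_move_to(0, 0xFFFFFFF0u);           /* two operand writes...            */
    garp_move_to(1, 0x40u);
    garp_run(16);                           /* ...processor suspends...         */
    printf("0x%08x\n", garp_move_from(0));  /* ...one result read (saturated)   */
    return 0;
}
```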

Slide 49: GARP Processor Instructions (table)

Slide 50: GARP Array
- Row-oriented logic: denser for datapath operations.
- Dedicated path for processor/memory data.
- The processor does not have to be involved in the array <-> memory path.

Slide 51: GARP Results
- General results: 10-20x speedup on streaming, feed-forward operations; 2-3x when data dependencies limit pipelining.

Slide 52: PRISC/Chimaera vs. GARP
PRISC/Chimaera:
- The basic operation is a single cycle: expfu (rfuop).
- No state.
- Could conceivably have multiple PFUs?
- Discover parallelism => run in parallel?
- Can't run deep pipelines.
GARP:
- The basic operation is multi-cycle: gaconfig, mtga, mfga.
- Can have state / deep pipelining.
- Multiple arrays viable?
- Identify mtga/mfga with the corresponding gaconfig?

Slide 53: Common Instruction Augmentation Features
To get around instruction expression limits:
- Define the new instruction in the array: many bits of configuration give broad expressibility and many parallel operators.
- Give the array configuration a short "name" that the processor can call out -- effectively the address of the operation.

Slide 54: Hybrid-Architecture RC Compute Models: VLIW/Microcoded Model
- Similar to instruction augmentation.
- A single tag (address, instruction) controls a number of more basic operations.
- Some difference in expectation: a number of different tags/operations can be sequenced together.

Slide 55: REMARC (Stanford)
- An array of "nano-processors": 16-bit, 32 instructions each; VLIW-like execution with a global sequencer.
- Coprocessor interface (similar to GARP), but with no direct array <-> memory path.

Slide 56: REMARC Architecture
- The processor issues a coprocessor rex instruction; a global controller then sequences the nano-processors over multiple cycles (microcode).
- Each nano-processor has its own instruction store (VLIW).

Slide 57: REMARC Results (figures: MPEG-2 and DES benchmarks)

Slide 58: Hybrid-Architecture RC Compute Models: Configurable Vector Unit Model
- Perform vector operations on data streams.
- Set up a spatial datapath to implement the operator in configurable hardware.
- Potential benefit in the ability to chain operations together in the datapath.
- May be a way to use GARP/NAPA? OneChip.

Slide 59: Hybrid-Architecture RC Compute Models: Observation
- All of the models above are single-threaded: limited to instruction-level (VLIW, bit-level) and data-level (vector/stream/SIMD) parallelism.
- No task/thread-level parallelism, except for I/O, where a dedicated task runs in parallel with the processor's task.

Slide 60: Hybrid-Architecture RC Compute Models: Autonomous Coroutine
- The array task is decoupled from the processor: a fork operation launches it, with a join upon completion.
- The array has its own internal state and access to shared state (memory).
- NAPA supports this to some extent -- task-level, at least, with multiple devices.

Slide 61: OneChip (Toronto, 1998)
- Wants the array to have direct memory-to-memory operations.
- Wants to fit into the programming model/ISA without forcing exclusive processor/FPGA operation, allowing decoupled processor/array execution.
- Key idea: the FPGA operates on memory-to-memory regions; make those regions explicit to processor issue, and scoreboard the memory blocks.

Slide 62: OneChip Pipeline (figure)

Slide 63: OneChip Coherency (figure)

Slide 64: OneChip Instructions
- The basic operation is: FPGA MEM[Rsource] -> MEM[Rdst], with block sizes restricted to powers of 2.
- Supports 14 "loaded" functions, with DPGA-style contexts so 4 can be cached.

Slide 65: OneChip
- The basic op is FPGA MEM -> MEM, with no state between these ops; coherence means the ops appear sequential.
- Could have multiple/parallel FPGA compute units, scoreboarded with the processor and each other.
- Open questions: single-source operations only? Can't chain FPGA operations?
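A minimal software sketch of the memory-to-memory model (invented names, not the OneChip ISA): a loaded FPGA function consumes a source block and produces a destination block, with block sizes restricted to powers of two so that the regions an operation touches are easy to scoreboard.

```c
#include <stdint.h>
#include <stdio.h>

/* One "loaded" FPGA function: transforms a source block into a destination
 * block, element by element (no state carried between operations). */
typedef uint32_t (*fpga_func_t)(uint32_t);

/* Model of the memory-to-memory operation: the block size is a power of 2,
 * so (src, log2_words) and (dst, log2_words) fully describe the regions a
 * scoreboard would have to track for dependence checking. */
static void fpga_mem_op(fpga_func_t fn, const uint32_t *src, uint32_t *dst,
                        unsigned log2_words) {
    size_t n = (size_t)1 << log2_words;
    for (size_t i = 0; i < n; i++)
        dst[i] = fn(src[i]);
}

static uint32_t byteswap(uint32_t x) {               /* example loaded function */
    return (x >> 24) | ((x >> 8) & 0xFF00u) | ((x << 8) & 0xFF0000u) | (x << 24);
}

int main(void) {
    uint32_t src[8] = {0x11223344u}, dst[8];
    fpga_mem_op(byteswap, src, dst, 3);               /* 2^3 = 8-word block */
    printf("0x%08x\n", dst[0]);                       /* prints 0x44332211 */
    return 0;
}
```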

Slide 66: To Date...
- In the context of full applications, we have seen fine-grained/automatic benefits.
- On computational kernels, we have seen the benefits of coarse-grain interaction (GARP, REMARC, OneChip).
- Missing: we still need to see full-application (and multi-application) benefits of these broader architectures...

Slide 67: Summary
- There are several different models and uses for a "reconfigurable processor".
- Some drive us into different design spaces.
- They exploit the density and expressiveness of fine-grained, spatial operations.
- There are a number of ways to integrate reconfigurable logic cleanly into a processor architecture... each with its own limitations.

