
1 Advanced Computer Architectures
Unit-6

2 Contents
- Reduced Instruction Set Computers (RISC)
- Complex Instruction Set Computers (CISC)
- Superscalars
- Vector Processing
- Parallel Cluster Computers
- Distributed Computers

3 1. RISC (Reduced Instruction Set Computer)
Key features:
- Large number of general-purpose registers, or use of compiler technology to optimize register use
- Limited and simple instruction set
- Emphasis on optimizing the instruction pipeline
Driving forces for CISC:
- Software costs far exceed hardware costs
- Increasingly complex high-level languages
These lead to large instruction sets, more addressing modes, and hardware implementations of HLL statements.

4 Execution Characteristics
Aspects of HLL program execution studied:
- Operations performed
- Operands used
- Execution sequencing
Operations:
- Assignments (movement of data)
- Conditional statements (IF, LOOP) for sequence control
- Procedure call/return, which is very time consuming
Some HLL statements lead to many machine-code operations.

5 Procedure Calls
- Very time consuming
- Cost depends on the number of parameters passed and on the level of nesting
- Most programs do not do a lot of calls followed by lots of returns
- Most variables are local (cf. locality of reference)
Implications:
- Best support is given by optimizing the most used and most time-consuming features
- Large number of registers for operand referencing
- Careful design of pipelines (branch prediction etc.)
- Simplified (reduced) instruction set

6 Large Register File
Software solution:
- Require the compiler to allocate registers
- Allocate based on the most used variables in a given time
- Requires sophisticated program analysis
Hardware solution:
- Have more registers, so more variables will be held in registers
Registers for local variables:
- Store local scalar variables in registers to reduce memory accesses
- Every procedure (function) call changes the locality: parameters must be passed, results must be returned, and variables from the calling program must be restored
A small example follows.
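A minimal sketch of why local scalars suit registers (dot3 is a hypothetical function, not from the slides): in a small leaf function like this, a compiler for a machine with a large register file can keep every operand in registers, so the body needs no memory accesses at all.

    #include <stdio.h>

    /* Hypothetical leaf function: all scalar operands can live in registers. */
    static int dot3(int a0, int a1, int a2, int b0, int b1, int b2) {
        int s = a0 * b0 + a1 * b1 + a2 * b2;  /* no memory traffic needed */
        return s;
    }

    int main(void) {
        printf("%d\n", dot3(1, 2, 3, 4, 5, 6));  /* prints 32 */
        return 0;
    }

The cost appears at the call itself: parameters, results, and saved variables would normally move through memory, which is exactly the traffic register windows try to eliminate.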

7 Register Windows
Observations:
- Only a few parameters are typically passed
- The depth of call nesting stays within a limited range
Approach:
- Use multiple small sets of registers
- A call switches to a different set of registers; a return switches back to the previously used set
Three areas within each register set:
- Parameter registers
- Local registers
- Temporary registers
The temporary registers of one set overlap the parameter registers of the next set, which allows parameter passing without moving data.

8 Overlapping Register Windows
Circular Buffer diagram

9 Operation of the Circular Buffer
- When a call is made, the current window pointer (CWP) is moved to point to the currently active register window
- If all windows are in use, an interrupt is generated and the oldest window (the one furthest back in the call nesting) is saved to memory
- A saved window pointer (SWP) indicates where the next saved window should be restored to
Global variables:
- Allocated by the compiler to memory; this is inefficient for frequently accessed variables
- Alternative: provide a set of registers for global variables
A sketch of the pointer bookkeeping follows.
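A minimal C sketch of the CWP/SWP bookkeeping, assuming six windows and omitting the window contents; it is not modelled on any specific CPU:

    #include <stdio.h>

    #define N_WINDOWS 6    /* windows arranged as a circular buffer */

    static int cwp = 0;    /* current window pointer */
    static int swp = 0;    /* saved window pointer */
    static int depth = 0;  /* call frames currently held in the register file */

    static void proc_call(void) {
        if (depth == N_WINDOWS - 1) {      /* all windows in use: overflow */
            printf("overflow: spill window %d to memory\n", swp);
            swp = (swp + 1) % N_WINDOWS;   /* oldest window now lives in memory */
            depth--;
        }
        cwp = (cwp + 1) % N_WINDOWS;       /* switch to the next window */
        depth++;
    }

    static void proc_return(void) {
        if (depth == 0) {                  /* needed window was spilled: underflow */
            swp = (swp + N_WINDOWS - 1) % N_WINDOWS;
            printf("underflow: restore window %d from memory\n", swp);
        } else {
            depth--;
        }
        cwp = (cwp + N_WINDOWS - 1) % N_WINDOWS;  /* back to the caller's window */
    }

    int main(void) {
        for (int i = 0; i < 8; i++) proc_call();    /* nest 8 deep: 3 spills */
        for (int i = 0; i < 8; i++) proc_return();  /* unwind: 3 restores */
        printf("cwp=%d swp=%d depth=%d\n", cwp, swp, depth);
        return 0;
    }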

10 Registers vs. Cache

Large Register File                        Cache
All local scalars                          Recently used local scalars
Individual variables                       Blocks of memory
Compiler-assigned global variables         Recently used global variables
Save/restore based on procedure nesting    Save/restore based on caching algorithm
Register addressing                        Memory addressing

11 Referencing a Scalar: Window-Based Register File vs. Cache (diagrams)

12 RISC Characteristics
- One instruction per cycle
- Register-to-register operations
- Few, simple addressing modes
- Few, simple instruction formats
- Hardwired design (no microcode)
- Fixed instruction format
- More compile time/effort

13 RISC Pipelining
Most instructions are register to register, giving two phases of execution:
- I: Instruction fetch
- E: Execute (ALU operation with register input and output)
Loads and stores need a third phase:
- E: Calculate the memory address
- D: Memory (register-to-memory or memory-to-register operation)
A timing sketch follows.
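A minimal sketch, using a made-up four-instruction program, of how these phases overlap when one instruction issues per cycle; it deliberately ignores the memory-port conflict between a D stage and a later I fetch:

    #include <stdio.h>

    /* Each instruction names its phases: I,E for register ops; I,E,D for
       loads and stores. Printing one row per instruction, shifted right by
       one cycle each time, yields a simple pipeline timing diagram. */
    typedef struct { const char *name; int stages; } Insn;

    int main(void) {
        Insn prog[] = {
            { "LOAD  rA <- M(X)",  3 },  /* I E D */
            { "LOAD  rB <- M(Y)",  3 },  /* I E D */
            { "ADD   rC <- rA+rB", 2 },  /* I E   */
            { "STORE M(Z) <- rC",  3 },  /* I E D */
        };
        const char *label[] = { " I", " E", " D" };
        int n = (int)(sizeof prog / sizeof prog[0]);
        for (int i = 0; i < n; i++) {
            printf("%-18s", prog[i].name);
            for (int c = 0; c < i; c++) printf("  ");  /* one issue per cycle */
            for (int s = 0; s < prog[i].stages; s++) printf("%s", label[s]);
            printf("\n");
        }
        return 0;
    }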

14 Effects of Pipelining

15 Optimization of Pipelining
Delayed branch:
- The branch does not take effect until after the instruction that follows it has executed
- That following instruction occupies the delay slot; the compiler tries to fill it with a useful instruction (often one moved from before the branch) so the pipeline does useful work while the branch resolves

16 2. CISC and RISC (contd.)
CISC: Complex Instruction Set Computers. Complex instructions involve a large number of steps. In the performance equation T = (N × S) / R, where T is program execution time, N the number of instructions executed, S the average number of basic steps per instruction, and R the clock rate, instructions that perform more complex operations reduce N but increase S (for example, halving N while S grows by only 1.5× still cuts T by 25%). The aim is that complex instructions combined with pipelining will still achieve good performance.

17 Complex Instruction Set Computer
Another characteristic of CISC computers is that they have instructions that act directly on memory addresses. For example, ADD L1, L2, L3 takes the contents of M[L1], adds it to the contents of M[L2], and stores the result in location M[L3]. An instruction like this takes three memory access cycles to execute, which makes for a potentially very long instruction execution cycle.
The problems with CISC computers are:
- The complexity of the design may slow down the processor
- The complexity of the design may result in costly errors in the processor design and implementation
- Many of the instructions and addressing modes are used rarely

18 Overlapped Register Windows (diagram)
R64-R73   Local to D
R58-R63   Common to C and D
R48-R57   Local to C
R42-R47   Common to B and C
R32-R41   Local to B
R26-R31   Common to A and B
R16-R25   Local to A
R10-R15   Common to A and D
R0-R9     Global registers (common to all procedures)
Each procedure's window consists of its local registers plus the common registers shared with its neighbours (e.g. Proc B uses R26-R47), together with the global registers.

19 Characteristics of RISC
RISC characteristics:
- Relatively few instructions
- Relatively few addressing modes
- Memory access limited to load and store instructions
- All operations done within the registers of the CPU
- Fixed-length, easily decoded instruction format
- Single-cycle instruction execution
- Hardwired rather than microprogrammed control
Advantages of RISC:
- VLSI realization
- Computing speed
- Design costs and reliability
- High-level language support

20 3. Superscalar Operation
A higher degree of concurrency can be achieved if multiple instruction pipelines are implemented in the processor. This means that multiple functional units are used, creating parallel paths through which different instructions can be executed in parallel. With such an arrangement, it becomes possible to start the execution of several instructions in every clock cycle. This mode of execution is called superscalar operation.

21 General Superscalar Organization
Superpipelined:
- Many pipeline stages need less than half a clock cycle
- Doubling the internal clock speed gets two tasks done per external clock cycle
Superscalar:
- Allows parallel fetch and execute of multiple instructions

22 Superscalar vs. Superpipeline

23 Limitations
Superscalar performance depends on instruction-level parallelism, exploited through compiler-based optimization and hardware techniques. It is limited by:
- True data dependency: e.g. ADD r1, r2 (r1 := r1 + r2) followed by MOVE r3, r1 (r3 := r1). The second instruction can be fetched and decoded in parallel with the first, but cannot execute until the first is finished
- Procedural dependency: instructions after a branch cannot execute in parallel with instructions before the branch. Also, if instruction length is not fixed, instructions have to be decoded to find out how many fetches are needed, which prevents simultaneous fetches
- Resource conflicts: two or more instructions requiring the same resource at the same time (e.g. two arithmetic instructions); resources can be duplicated (e.g. two arithmetic units)
- Output dependency
- Antidependency
A sketch of these data dependencies follows.
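A minimal sketch of the dependency kinds, with C statements standing in for machine instructions (the variable names are illustrative, not from the slides):

    #include <stdio.h>

    int main(void) {
        int r1 = 1, r2 = 2, r3 = 0;

        /* True (read-after-write) dependency: the MOVE needs the ADD's
           result, so it cannot execute in the same cycle as the ADD. */
        r1 = r1 + r2;   /* ADD  r1, r2 */
        r3 = r1;        /* MOVE r3, r1  -- must wait for r1 */

        /* Output (write-after-write) dependency: both write r3, so their
           completion order must be preserved. */
        r3 = r1 + r2;
        r3 = r2 * 2;

        /* Antidependency (write-after-read): the second statement must not
           overwrite r2 before the first has read it. */
        r1 = r2 + 1;
        r2 = 0;

        /* Independent operations like these two could issue in the same
           cycle on a machine with two arithmetic units. */
        int a = r1 + r3;
        int b = r2 * 5;

        printf("%d %d %d %d %d\n", r1, r2, r3, a, b);
        return 0;
    }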

24 Effect of Dependencies

25 4. Vector Computation
Mathematical problems involving physical processes present special difficulties for computation:
- Aerodynamics, seismology, meteorology
- Continuous-field simulation
- High precision, with repeated floating-point calculations on large arrays of numbers
Supercomputers handle these types of problem:
- Hundreds of millions of flops
- $10-15 million
- Optimized for calculation rather than multitasking and I/O
- Limited market: research, government agencies, meteorology
Array processors:
- An alternative to the supercomputer
- Configured as peripherals to mainframes and minis
- Run just the vector portion of problems
A sketch of a typical vector-style loop follows.
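A minimal sketch of the kind of loop vector processors target: the same floating-point operation applied across whole arrays (the classic y = a*x + y, with a small made-up N for the demo). A vector machine executes this as a few vector instructions instead of N scalar iterations; on a scalar CPU a compiler may auto-vectorize it using SIMD instructions.

    #include <stdio.h>

    #define N 8  /* tiny for the demo; real workloads use very large arrays */

    int main(void) {
        double a = 2.0, x[N], y[N];
        for (int i = 0; i < N; i++) { x[i] = i; y[i] = 1.0; }

        for (int i = 0; i < N; i++)
            y[i] = a * x[i] + y[i];   /* one multiply-add per element */

        for (int i = 0; i < N; i++) printf("%g ", y[i]);
        printf("\n");
        return 0;
    }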

26 Approaches to Vector Computation

27 5. Parallel Cluster Computers
Multiple processor organizations (Flynn's taxonomy):
- Single instruction, single data stream (SISD)
- Single instruction, multiple data stream (SIMD)
- Multiple instruction, single data stream (MISD)
- Multiple instruction, multiple data stream (MIMD)

28 Taxonomy of Parallel Processor Architectures

29 Loosely Coupled - Clusters
- Collection of independent uniprocessors or SMPs
- Interconnected to form a cluster
- Communication via fixed paths or network connections
(diagram: Parallel Organizations - SISD)

30 Parallel Organizations - SIMD and MIMD Shared Memory (diagrams)

31 Parallel Organizations - MIMD Distributed Memory (diagram)
Block Diagram of Tightly Coupled Multiprocessor (diagram)

32 Organization Classification
- Time-shared or common bus
- Multiport memory
- Central control unit
(diagram: Symmetric Multiprocessor Organization)

33 Clusters
- An alternative to SMP for high-performance, high-availability server applications
- A group of interconnected whole computers working together as a unified resource, giving the illusion of being one machine
- Each computer is called a node
Cluster benefits:
- Absolute scalability
- Incremental scalability
- Superior price/performance

34 Cluster Configurations - Standby Server, No Shared Disk
Cluster Configurations - Shared Disk

35 Cluster Computer Architecture

36 6. Distributed Computers

37 The End

