Computer Architecture

Computer Architecture
By Ms. Sukanya Roy, Assistant Professor, CSE Dept., UEMK

Harvard architecture

A Harvard architecture has separate data and instruction buses, allowing transfers to be performed on both buses simultaneously. A von Neumann architecture has only one bus, used for both data transfers and instruction fetches, so data transfers and instruction fetches must be scheduled; they cannot be performed at the same time.

It is possible to have two separate memory systems for a Harvard architecture. As long as data and instructions can be fed in at the same time, it does not matter whether they come from a cache or from memory. But there are problems with this. Compilers generally embed data (literal pools) within the code, and it is often also necessary to write to the instruction memory space, for example for self-modifying code or, if an ARM debugger is used, to set software breakpoints in memory. If there are two completely separate, isolated memory systems, this is not possible; there must be some kind of bridge between the memory systems to allow it.

Using a simple, unified memory system together with a Harvard architecture is highly inefficient: unless it is possible to feed data onto both buses at the same time, it might be better to use a von Neumann architecture processor.

Use of caches. At higher clock speeds caches are useful, as memory becomes proportionally slower. Harvard architectures tend to be targeted at higher-performance systems, so caches are nearly always used in such systems.
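The bus contention described above can be sketched in a few lines. The model below is purely illustrative (the class names and cycle counts are simplifications, not a real ISA): on a von Neumann machine every instruction fetch and data access competes for the single bus, while a Harvard machine services both streams in parallel.

```python
# Illustrative model (hypothetical, not cycle-accurate) of the one-bus
# vs two-bus scheduling constraint described above.

class VonNeumann:
    """One unified memory and one bus: fetches and data accesses take turns."""
    def __init__(self, memory):
        self.memory = memory

    def bus_cycles(self, fetches, data_accesses):
        # Every access competes for the single bus, so the costs add up.
        return fetches + data_accesses

class Harvard:
    """Separate instruction and data memories, each with its own bus."""
    def __init__(self, imem, dmem):
        self.imem, self.dmem = imem, dmem

    def bus_cycles(self, fetches, data_accesses):
        # The two buses run in parallel, so the slower stream dominates.
        return max(fetches, data_accesses)

vn = VonNeumann(memory=[0] * 1024)
hv = Harvard(imem=[0] * 512, dmem=[0] * 512)
print(vn.bus_cycles(fetches=100, data_accesses=40))  # 140
print(hv.bus_cycles(fetches=100, data_accesses=40))  # 100
```

With equal memory speeds, the Harvard machine finishes in the time of the longer of the two streams, which is exactly why feeding both buses at once is the whole point of the design.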

Von Neumann architectures

Von Neumann architectures usually have a single unified cache, which stores both instructions and data. The proportion of each in the cache is variable, which may be a good thing. It would in principle be possible to have separate instruction and data caches, but this would probably not be very useful, as only one cache could ever be accessed at a time.

Caches for Harvard architectures are very useful. Such a system has a separate cache for each bus. Trying to use a shared cache on a Harvard architecture would be very inefficient, since only one bus could be fed at a time; having two caches means it is possible to feed both buses simultaneously, which is exactly what a Harvard architecture needs. This also makes it possible to have a very simple unified memory system behind the caches, using the same address space for both instructions and data, which gets around the problem of literal pools and self-modifying code.

What it does mean, however, is that when starting with empty caches, both instructions and data must be fetched from the single memory system, so two memory accesses are needed before the core has everything it requires. At that point performance is no better than a von Neumann architecture. As the caches fill up, though, it becomes much more likely that the instruction or the data value is already cached, so only one of the two has to be fetched from memory; the other is supplied directly from the cache with no additional delay. The best performance is achieved when both instructions and data are supplied by the caches, with no need to access external memory at all.

This is the most sensible compromise, and it is the arrangement used by ARM's Harvard processor cores. Two completely separate memory systems can perform better, but would be difficult to implement.
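The warm-up behaviour described above can be sketched as a toy simulation (illustrative only; the caches are modelled as unbounded sets, with no eviction): on the first pass through a loop both streams miss and go to the unified memory, but once the split caches are warm, external memory is not touched at all.

```python
# Toy model of split I/D caches in front of a single unified memory.
# Caches are unbounded sets -- no eviction -- which is enough to show
# the cold-start cost and the steady-state benefit described above.

def external_accesses(instr_trace, data_trace):
    """Count unified-memory accesses for paired instruction/data streams."""
    icache, dcache = set(), set()
    accesses = 0
    for i_addr, d_addr in zip(instr_trace, data_trace):
        if i_addr not in icache:       # instruction-cache miss
            accesses += 1
            icache.add(i_addr)
        if d_addr not in dcache:       # data-cache miss
            accesses += 1
            dcache.add(d_addr)
    return accesses

# A tight 4-instruction loop executed 4 times over 2 data words:
loop_i = [0, 1, 2, 3] * 4
loop_d = [100, 101, 100, 101] * 4
print(external_accesses(loop_i, loop_d))  # 6
```

Only 6 of the 32 potential accesses reach memory: 4 instruction misses and 2 data misses on the first iteration, after which every cycle is served entirely from the two caches.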


IAS Architecture

Memory: 1000 x 40-bit words, each holding either
- one 40-bit binary number, or
- two 20-bit instructions.

Set of registers (storage in the CPU):
- Memory Buffer Register (MBR)
- Memory Address Register (MAR)
- Instruction Register (IR)
- Instruction Buffer Register (IBR)
- Program Counter (PC)
- Accumulator (AC)
- Multiplier Quotient (MQ)
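The interplay of these registers during a fetch can be sketched as follows. This is a simplified model (the opcodes in the demo are made up, and only the fetch step is shown): each 40-bit word packs two 20-bit instructions, each an 8-bit opcode plus a 12-bit address, and the IBR buffers the right-hand instruction so the next fetch needs no memory access.

```python
# Simplified sketch of the IAS fetch cycle. Only fetch is modelled;
# the demo opcodes (0x01, 0x02) are hypothetical.

class IAS:
    def __init__(self, memory):
        self.memory = memory          # up to 1000 x 40-bit words
        self.PC = self.AC = self.MQ = 0
        self.MAR = self.MBR = self.IR = 0
        self.IBR = None               # buffers the right-hand instruction

    def fetch(self):
        """Return (opcode, address) of the next 20-bit instruction."""
        if self.IBR is not None:              # right half already buffered:
            instr, self.IBR = self.IBR, None  # no memory access needed
            self.PC += 1                      # word pair consumed
        else:
            self.MAR = self.PC                # address goes out via MAR
            self.MBR = self.memory[self.MAR]  # 40-bit word arrives in MBR
            instr = (self.MBR >> 20) & 0xFFFFF   # left instruction now
            self.IBR = self.MBR & 0xFFFFF        # right one saved for later
        self.IR = (instr >> 12) & 0xFF        # 8-bit opcode into IR
        return self.IR, instr & 0xFFF         # opcode, 12-bit address

# One word holding two instructions: (op 0x01, addr 5) and (op 0x02, addr 9).
word = (0x01 << 32) | (5 << 20) | (0x02 << 12) | 9
m = IAS(memory=[word])
print(m.fetch())  # (1, 5)  -- left instruction, fetched from memory
print(m.fetch())  # (2, 9)  -- right instruction, served from the IBR
```

The second fetch never touches memory: this is precisely what the Instruction Buffer Register exists for in the IAS design.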