Invitation to Computer Science, 6th Edition
Chapter 5: Computer Systems Organization
Objectives
In this chapter, you will learn about:
–The components of a computer system
–The Von Neumann architecture
–Non-Von Neumann architectures
Introduction
Computer organization
–Branch of computer science that studies computers in terms of their major functional units and how they work
Concept of abstraction
–Used throughout computer science
–Without it, it would be virtually impossible to study computer design or any other large, complex system
Figure 5.1 The Concept of Abstraction
The Components of a Computer System
The Von Neumann architecture is based on the following three characteristics:
–Four major subsystems called memory, input/output, the arithmetic/logic unit (ALU), and the control unit
–The stored program concept
–The sequential execution of instructions
Figure 5.2 Components of the Von Neumann Architecture
Memory and Cache
Memory
–Functional unit of a computer that stores and retrieves the instructions and the data being executed
Random access memory (RAM)
–Named for the random access technique used by computer memory
Read-only memory (ROM)
–Information is prerecorded during manufacture
Memory and Cache (continued)
With a cell size of 8 bits:
–The largest unsigned integer value that can be stored in a single cell is 11111111 (255)
With N-bit addresses:
–[0 .. (2^N – 1)] is the range of addresses available on a computer
When dealing with memory:
–Distinguish between an address and the contents of that address
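A quick check of these numbers: the 8-bit cell size comes from the slide, but the 16-bit address width below is only an illustrative assumption.

    # Illustrative only: the address width is an assumption, not fixed by the text.
    CELL_BITS = 8        # bits per memory cell
    ADDRESS_BITS = 16    # assumed width of the memory address register (MAR)

    max_cell_value = 2 ** CELL_BITS - 1     # largest unsigned value one cell can hold
    address_count = 2 ** ADDRESS_BITS       # number of distinct addresses, 0 .. 2^N - 1

    print(max_cell_value)      # 255
    print(address_count - 1)   # 65535, the highest usable address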
Figure 5.3 Structure of Random Access Memory
Figure 5.4 Maximum Memory Sizes
Memory and Cache (continued)
Basic memory operations
–Fetching and storing
Memory access time
–Typically about 5 to 10 nanoseconds (nsec)
Memory registers
–Used to implement the fetch and store operations
Memory Address Register (MAR)
–Holds the address of the cell to be fetched or stored
Memory and Cache (continued)
Memory Data Register (MDR)
–Contains the data value being fetched or stored
Memory organization
–Cells are arranged as a two-dimensional structure and stored in row major order
Selection lines
–Row selection line
–Column selection line
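A minimal sketch of the fetch and store operations, assuming a tiny 16-cell memory laid out as a 4x4 grid in row major order; the function names and sizes are invented for illustration.

    # Hypothetical illustration of MAR/MDR-based fetch and store; names are invented.
    ROWS, COLS = 4, 4
    memory = [0] * (ROWS * COLS)   # 16 cells, addresses 0..15, in row major order

    def decode(address):
        # Split the address into a row and column selection, as in Figure 5.6.
        return address // COLS, address % COLS

    def fetch(mar):
        row, col = decode(mar)
        mdr = memory[row * COLS + col]   # cell contents are copied into the MDR
        return mdr

    def store(mar, mdr):
        row, col = decode(mar)
        memory[row * COLS + col] = mdr   # MDR contents are copied into the cell

    store(5, 42)
    print(fetch(5))   # 42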
Figure 5.5 Organization of Memory and the Decoding Logic
Figure 5.6 Two-Dimensional Memory Address Organization
Memory and Cache (continued)
Fetch/store controller
–Determines whether the contents of a memory cell are placed into the MDR (a fetch) or the contents of the MDR are placed into a memory cell (a store)
Cache memory
–Principle of locality: when the computer uses something, it will probably use it again very soon, and it will probably use the “neighbors” of this item very soon
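A rough sketch of how a cache exploits locality; the cache size, the eviction policy, and the helper names are all assumptions made for illustration, not the book's design.

    # Hypothetical illustration of cache lookups exploiting locality; details assumed.
    memory = {addr: addr * 10 for addr in range(64)}   # stand-in for slow main memory
    cache = {}                                          # small fast cache: address -> value
    CACHE_SIZE = 8

    def read(address):
        if address in cache:                 # cache hit: fast path
            return cache[address]
        value = memory[address]              # cache miss: go to slower main memory
        if len(cache) >= CACHE_SIZE:
            cache.pop(next(iter(cache)))     # evict the oldest entry (crude policy)
        cache[address] = value               # keep it around, expecting reuse (locality)
        return value

    for addr in [3, 3, 3, 4, 4, 5]:          # repeated, nearby accesses mostly hit
        read(addr)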
Figure 5.7 Overall RAM Organization
Input/Output and Mass Storage
Input/output (I/O) units
–Devices that allow a computer system to communicate and interact with the outside world as well as store information
Volatile memory
–Information disappears when the power is turned off
Nonvolatile storage
–The role of mass storage devices such as disks and tapes
Input/Output and Mass Storage (continued)
Input/output devices come in two basic types
–Those that represent information in human-readable form for human consumption
–Those that store information in machine-readable form for access by a computer system
Disk
–Stores information in units called sectors
–A fixed number of sectors are placed in a concentric circle on the surface of the disk, called a track
Figure 5.8 Overall Organization of a Typical Disk
Input/Output and Mass Storage (continued)
Seek time
–Time needed to position the read/write head over the correct track
Latency
–Time for the beginning of the desired sector to rotate under the read/write head
Transfer time
–Time for the entire sector to pass under the read/write head and have its contents read into or written from memory
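A back-of-the-envelope calculation of one sector access built from these three components; the rotation speed, seek time, and sectors per track below are made-up example values, not figures from the text.

    # Example numbers only; real disks vary widely.
    rpm = 7200                      # assumed rotation speed
    avg_seek_ms = 8.0               # assumed average seek time
    sectors_per_track = 64          # assumed sectors on each track

    rotation_ms = 60_000 / rpm                       # one full revolution: ~8.33 ms
    avg_latency_ms = rotation_ms / 2                 # wait half a revolution on average
    transfer_ms = rotation_ms / sectors_per_track    # one sector passes under the head

    total_ms = avg_seek_ms + avg_latency_ms + transfer_ms
    print(round(total_ms, 2))       # roughly 12.3 ms for one sector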
Input/Output and Mass Storage (continued)
Sequential access storage device (SASD)
–Does not require that all units of data be identifiable via unique addresses
Direct access storage devices
–Much faster at accessing individual pieces of information
I/O controller
–Has a small amount of memory called an I/O buffer
–I/O control and logic: handles the mechanical functions of the I/O device
Figure 5.9 Organization of an I/O Controller
The Arithmetic/Logic Unit
Subsystem that performs addition, subtraction, and comparison for equality
Components
–Registers, interconnections between components, and the ALU circuitry
Register
–Storage cell that holds the operands of an arithmetic operation and holds its result
Bus
–Path for electrical signals
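A small sketch of the idea behind Figures 5.10 to 5.13: the ALU circuitry produces several candidate results, and a multiplexor-like selector picks the one named by the operation code. The numeric op codes and the function name are invented for illustration.

    # Hypothetical ALU sketch; operation codes are made up for illustration.
    def alu(left, right, op_code):
        # Compute every result the circuitry can produce, as in Figure 5.12 ...
        results = {
            0: left + right,                  # ADD
            1: left - right,                  # SUBTRACT
            2: 1 if left == right else 0,     # comparison for equality
        }
        # ... then use the op code like a multiplexor selector to pick one output.
        return results[op_code]

    print(alu(7, 5, 0))   # 12
    print(alu(7, 5, 2))   # 0 (not equal)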
Figure 5.10 Three-Register ALU Organization
Figure 5.11 Multiregister ALU Organization
Figure 5.12 Using a Multiplexor Circuit to Select the Proper ALU Result
Figure 5.13 Overall ALU Organization
The Control Unit
Stored program
–Sequence of machine language instructions stored as binary values in memory
Control unit
–Tasks: fetch, decode, and execute
Machine Language Instructions
Instructions that can be decoded and executed by the control unit of a computer
Operation code field
–Unique unsigned integer code assigned to each machine language operation recognized by the hardware
Address field(s)
–Memory addresses of the values on which the operation will work
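A sketch of how an op code and address fields might be packed into a single instruction word; the 4-bit op code and two 6-bit address fields are assumptions for illustration and do not reproduce the exact layout shown in Figure 5.14.

    # Assumed layout: 4-bit op code followed by two 6-bit address fields (16 bits).
    OP_BITS, ADDR_BITS = 4, 6

    def encode(op_code, addr1, addr2):
        return (op_code << (2 * ADDR_BITS)) | (addr1 << ADDR_BITS) | addr2

    def decode(instruction):
        op_code = instruction >> (2 * ADDR_BITS)
        addr1 = (instruction >> ADDR_BITS) & (2 ** ADDR_BITS - 1)
        addr2 = instruction & (2 ** ADDR_BITS - 1)
        return op_code, addr1, addr2

    word = encode(3, 12, 45)    # e.g. op code 3 operating on cells 12 and 45
    print(decode(word))         # (3, 12, 45)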
Figure 5.14 Typical Machine Language Instruction Format
Machine Language Instructions (continued)
Instruction set
–The set of all operations that can be executed by a processor
Reduced instruction set computers (RISC machines)
–Include as few as 30–50 instructions
Complex instruction set computers (CISC machines)
–Include 300–500 very powerful instructions
Machine Language Instructions (continued)
Classes of machine language instructions
–Data transfer
–Arithmetic
–Compare
–Branch
Machine Language Instructions (continued)
Data transfer operations
–Move values to and from memory and registers
–Instruction examples:
LOAD X: Load register R with the contents of memory cell X
STORE X: Store the contents of register R into memory cell X
MOVE X, Y: Copy the contents of memory cell X into memory cell Y
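A small sketch of what these three operations do to register R and memory; representing memory as a Python list and R as a single variable is only an illustration.

    # Illustration of LOAD / STORE / MOVE semantics; representation is assumed.
    memory = [0] * 16   # memory cells addressed 0..15
    R = 0               # the single register referred to by the slide

    def LOAD(x):
        global R
        R = memory[x]              # register R gets the contents of cell X

    def STORE(x):
        memory[x] = R              # cell X gets the contents of register R

    def MOVE(x, y):
        memory[y] = memory[x]      # cell Y gets a copy of cell X

    memory[2] = 99
    LOAD(2); STORE(7); MOVE(7, 8)
    print(R, memory[7], memory[8])   # 99 99 99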
Machine Language Instructions (continued)
Arithmetic/logic operations
–Perform arithmetic operations on values in memory or registers
–Instruction examples:
ADD X, Y, Z (three-address instruction): CON(Z) = CON(X) + CON(Y)
ADD X, Y (two-address instruction): CON(Y) = CON(X) + CON(Y)
ADD X (one-address instruction): R = CON(X) + R
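A sketch of how the three ADD formats differ, writing CON(X) for "contents of cell X" as on the slide; the Python representation is illustrative only.

    # Illustrative semantics of three-, two-, and one-address ADD instructions.
    memory = {10: 4, 11: 5, 12: 0}   # CON(10)=4, CON(11)=5
    R = 1                             # implicit register for the one-address form

    def add3(x, y, z): memory[z] = memory[x] + memory[y]   # CON(Z) = CON(X) + CON(Y)
    def add2(x, y):    memory[y] = memory[x] + memory[y]   # CON(Y) = CON(X) + CON(Y)
    def add1(x):
        global R
        R = memory[x] + R                                  # R = CON(X) + R

    add3(10, 11, 12)      # memory[12] becomes 9
    add2(10, 11)          # memory[11] becomes 9
    add1(10)              # R becomes 5
    print(memory[12], memory[11], R)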
Machine Language Instructions (continued)
Compare operations
–Compare two values and set an indicator on the basis of the results of the compare; the indicator bits are kept in a status register (also called a condition register, a special register)
–Instruction example: COMPARE X, Y
If CON(X) > CON(Y), set GT = 1, EQ = 0, LT = 0
If CON(X) = CON(Y), set GT = 0, EQ = 1, LT = 0
If CON(X) < CON(Y), set GT = 0, EQ = 0, LT = 1
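A sketch of COMPARE setting the GT/EQ/LT condition codes; modeling the status register as a dictionary is an invented representation.

    # Illustration only: the status register is modeled as a small dictionary.
    memory = {0: 7, 1: 3}
    status = {"GT": 0, "EQ": 0, "LT": 0}

    def COMPARE(x, y):
        a, b = memory[x], memory[y]
        status["GT"] = 1 if a > b else 0
        status["EQ"] = 1 if a == b else 0
        status["LT"] = 1 if a < b else 0

    COMPARE(0, 1)
    print(status)   # {'GT': 1, 'EQ': 0, 'LT': 0} because CON(0) > CON(1)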
Machine Language Instructions (continued)
Branch operations
–Jump to a new memory address to continue processing
–Instruction examples:
JUMP X (unconditional jump)
JUMPGT X / JUMPEQ X / JUMPLT X
JUMPGE X / JUMPLE X
HALT
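Continuing the previous sketch, a conditional jump only changes where execution continues if the matching condition code is set; the program counter variable here is an illustrative stand-in.

    # Illustration: a conditional jump consults the flags left by an earlier COMPARE.
    status = {"GT": 1, "EQ": 0, "LT": 0}   # as set by the COMPARE sketch above
    pc = 4                                  # assume the next instruction is at address 4

    def JUMPGT(x):
        global pc
        if status["GT"] == 1:
            pc = x                          # transfer control to address X

    JUMPGT(20)
    print(pc)   # 20, because GT was set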
Control Unit Registers and Circuits
Program counter (PC)
–Holds the address of the next instruction to be executed
Instruction register (IR)
–Holds a copy of the instruction fetched from memory
Instruction decoder
–Determines what instruction is in the IR
Figure 5.15 Examples of Simple Machine Language Instruction Sequences
Figure 5.16 Organization of the Control Unit Registers and Circuits
Figure 5.17 The Instruction Decoder
Putting All the Pieces Together–the Von Neumann Architecture
Program execution phases
–Fetch, decode, and execute
Von Neumann cycle
–The repetition of the fetch/decode/execute phase
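To tie the pieces together, here is a toy fetch/decode/execute loop in the spirit of the chapter's machine; the numeric op codes and the instruction encoding are invented for illustration and do not match the instruction set of Figure 5.19.

    # Toy Von Neumann cycle; op codes and encoding are invented for illustration.
    LOAD, ADD, STORE, HALT = 0, 1, 2, 3        # assumed op codes

    # Each instruction is (op_code, address); data lives in the same memory as code.
    memory = [
        (LOAD, 5), (ADD, 6), (STORE, 7), (HALT, 0),   # addresses 0-3: the program
        None, 4, 5, 0,                                # addresses 5-7: data cells
    ]
    pc, R, running = 0, 0, True

    while running:
        ir = memory[pc]                 # FETCH: copy the instruction at PC into the IR
        pc += 1                         # advance PC to the next instruction
        op, addr = ir                   # DECODE: split into op code and address field
        if op == LOAD:    R = memory[addr]            # EXECUTE
        elif op == ADD:   R = R + memory[addr]
        elif op == STORE: memory[addr] = R
        elif op == HALT:  running = False

    print(memory[7])   # 9: the program added CON(5) and CON(6) and stored the sum in cell 7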
Figure 5.18 The Organization of a Von Neumann Computer
Figure 5.19 Instruction Set for Our Von Neumann Machine
Non-Von Neumann Architectures
Problems that computers are being asked to solve
–Have grown significantly in size and complexity
Important limit on increased processor speed
–Inability to place gates any closer together on a chip
Slowing down
–The rate of increase in performance of newer machines
Von Neumann bottleneck
–Inability of the sequential one-instruction-at-a-time Von Neumann model to handle today's large-scale problems
Figure 5.20 Graph of Processor Speeds, 1945 to the Present
Non-Von Neumann Architectures (continued)
Parallel processing
–Building computers not with one processor, but with tens, hundreds, or even thousands
SIMD parallel processing
–The ALU is replicated many times
–Each ALU has its own local memory where it may keep private data
MIMD parallel processing
–Entire processors are replicated
–Every processor is capable of executing its own separate program
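A rough analogy in ordinary Python, not real parallel hardware: SIMD-style means the same instruction applied across many data items, while MIMD-style means independent programs running on their own data. The thread pool below only simulates the idea.

    # Analogy only: plain Python standing in for SIMD- and MIMD-style execution.
    from concurrent.futures import ThreadPoolExecutor

    data = [1, 2, 3, 4]

    # SIMD style: the *same* operation is applied to every data element.
    simd_result = [x * 2 for x in data]

    # MIMD style: each "processor" runs its own separate program on its own data.
    def program_a(x): return x + 100
    def program_b(x): return x * x

    with ThreadPoolExecutor(max_workers=2) as pool:
        mimd_result = [pool.submit(program_a, 1).result(),
                       pool.submit(program_b, 7).result()]

    print(simd_result, mimd_result)   # [2, 4, 6, 8] [101, 49]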
Figure 5.21 A SIMD Parallel Processing System
Non-Von Neumann Architectures (continued)
Scalability
–It is possible to match the number of processors to the size of the problem
Massively parallel MIMD machines
–Have achieved solutions to large problems thousands of times faster than is possible using a single processor
Grid computing
–Enables researchers to easily and transparently access computer facilities without regard for their location
Summary of Level 2
Chapter 4
–Looked at the basic building blocks of computers: binary codes, transistors, gates, and circuits
Chapter 5
–Examined the standard model for computer design, called the Von Neumann architecture
System software
–Intermediary between the user and the hardware components of the Von Neumann machine
Summary
Computer organization
–Examines the different subsystems of a computer: memory, input/output, arithmetic/logic unit, and control unit
Machine language
–Gives codes for each primitive instruction the computer can perform and its arguments
Von Neumann machine
–Sequential execution of a stored program
Parallel computers
–Improve speed by doing multiple tasks at one time