
1 CS 6020 - Chapter 3 (3A and 10.2.2) – Part 5 of 5
Dr. Clincy, Professor of CS
Part 1 – Finish Ch 3 in class
Part 1 – Exam review
Part 2 – Start Ch 4 online
Dr. Clincy Lecture

2 T Flip Flop
T flip flops are good for counters – a T flip flop changes its state on every clock cycle if the input, T, is 1.
Positive-edge-triggered flip flop.
Since the previous state of Q was 0, it complements it to 1.
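The toggling behavior can be sketched in a few lines of Python (a behavioral sketch of the next-state rule, not a gate-level model; the function name is ours):

```python
def t_flip_flop(q, t):
    """Next state of a T flip flop on a positive clock edge:
    toggle when T=1, hold when T=0."""
    return q ^ t  # XOR of the present state and the T input

# Starting from Q=0 with T held at 1, the output toggles on every
# clock edge, which is why T flip flops are natural for counters.
q = 0
history = []
for _ in range(4):          # four clock edges
    q = t_flip_flop(q, 1)
    history.append(q)
# history -> [1, 0, 1, 0]
```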

3 JK Flip Flop Combines the behavior of the SR and T flip flops
The first three entries show the same behavior as the SR latch (when CLK=1).
The state S=R=1 is normally undefined; for the JK flip flop, when J=K=1 the next state is the complement of the present state.
It can store data like a D flip flop, or the J and K inputs can be tied together to build counters (like a T flip flop).
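The full truth table can be written as a small Python sketch (our own function name; behavior as described above):

```python
def jk_flip_flop(q, j, k):
    """Next state of a JK flip flop on a clock edge."""
    if j == 0 and k == 0:
        return q        # hold (same as SR with S=R=0)
    if j == 0 and k == 1:
        return 0        # reset (SR behavior)
    if j == 1 and k == 0:
        return 1        # set (SR behavior)
    return 1 - q        # J=K=1: toggle -- the case SR leaves undefined

# Tying J and K together gives T flip flop behavior:
q = 0
for _ in range(3):
    q = jk_flip_flop(q, 1, 1)
# toggled three times from 0, so q is now 1
```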

4 Registers and Shift Registers
A flip flop can store ONE bit – to handle a WORD, you need a number of flip flops (32, 64, etc.) arranged in a common structure called a REGISTER.
All flip flops are synchronized by a common clock.
Data is written into (loaded into) all flip flops at the same time.
Data is read from all flip flops at the same time.
[Figure: a simple shift register – four D flip flops F1–F4 chained Q-to-D, with a common clock, serial In at F1 and Out at F4]
We want the ability to rotate and shift the data.
A clock pulse causes the contents of F1, F2, F3 and F4 to shift right (serially).
To do a rotation, simply connect Out to In.
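The shift-and-rotate behavior can be modeled directly (a list stands in for the four flip flops; the function names are ours):

```python
def shift_right(reg, serial_in):
    """One clock pulse of a right-shift register.
    reg is a list [F1, F2, F3, F4]; the new bit enters at F1
    and the old F4 value falls out as the serial output."""
    out = reg[-1]
    return [serial_in] + reg[:-1], out

def rotate_right(reg):
    """Rotation: connect Out back to In."""
    new_reg, _ = shift_right(reg, reg[-1])
    return new_reg

reg = [1, 0, 1, 1]
r = reg
for _ in range(4):          # four clock pulses
    r = rotate_right(r)
# a full rotation of a 4-bit register restores the original contents
```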

5 Registers and Shift Registers
Can load either serially or in parallel.
When a clock pulse occurs, a serial shift takes place if Shift'/Load = 0; if Shift'/Load = 1, a parallel load is performed.

6 Counters – Called a Ripple Counter
A 3-stage (3-bit) counter constructed using T flip flops.
With T flip flops, when the input T=1, the flip flop toggles – it changes state on each successive clock pulse.
Initially all stages are set to 0.
On the 1st clock pulse, Q0=1, therefore Q0'=0, disabling Q1, and Q1 disables Q2 (have 1,0,0).
On the 2nd clock pulse, Q0=0, therefore Q0'=1, causing Q1=1 and therefore Q1'=0, disabling Q2 (have 0,1,0).
On the 3rd clock pulse, Q0=1, therefore Q0'=0, disabling Q2 (have 1,1,0).
Etc.
Count sequence (Q0 is the LSB): 000, 001, 010, 011, 100, 101, 110, 111.

7 Combinatorial or Combinational Logic
Combinatorial or combinational logic: the current state or output of the device is affected only by the current inputs.
[Block diagram: New Input → Circuit → Current State or Output]
Examples: decoders, multiplexers.
Sequential logic: the current state or output of the device is also affected by previous states.
[Block diagram: New Input + Previous State or Output → Circuit (flip flops) → Current State or Output]
Examples: shift registers, counters.

8 Sequential Circuit – State Diagram
If x=0, count up; if x=1, count down.
We are interested in when the count of 2 is realized – z=1 when the count reaches 2, else z=0.
If at 0 and x=0, count up to 1 (and z=0).
If at 0 and x=1, count down to 3 (and z=0).
[Figure: state diagram of a mod-4 up/down counter (states S0–S3) that detects the count of 2]
The state diagram describes the functional behavior without any reference to implementation.
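The behavior can be checked with a short simulation. Here z is modeled as asserting when the count of 2 is reached on a transition (one reading of the diagram; the exact placement of z depends on whether the machine is drawn Mealy- or Moore-style):

```python
def step(state, x):
    """One clock pulse of the mod-4 up/down counter.
    state is 0..3; x=0 counts up, x=1 counts down.
    Returns (next_state, z) with z=1 when the count of 2 is reached."""
    nxt = (state + 1) % 4 if x == 0 else (state - 1) % 4
    z = 1 if nxt == 2 else 0
    return nxt, z

# From S0, counting up: S0 -> S1 -> S2 (z asserts) -> S3 -> S0
s, outputs = 0, []
for x in [0, 0, 0, 0]:
    s, z = step(s, x)
    outputs.append(z)

# From S0, counting down wraps around to S3 (and z=0), as on the slide
down_state, down_z = step(0, 1)
```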

9 Sequential Circuit – State Table
The information in the state diagram can be represented in a state table.
[Figure: state diagram and state table of the mod-4 up/down counter (states S0–S3) that detects the count of 2]

10 Sequential Circuit – Equation
Inputs – y2, y1, x
Outputs – Y2, Y1
[Figure: next-state and output equations derived from the state table]
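The slide's equations are not visible in the transcript, but with the common binary state assignment S0=00, S1=01, S2=10, S3=11 (y2 the high-order bit), one consistent derivation – an assumption on our part, to be checked against the actual slide – gives Y1 = y1', Y2 = y2 XOR y1 XOR x. The counting behavior can verify it exhaustively:

```python
# Hypothetical next-state equations for the assumed state assignment:
#   Y1 = y1'              (the low bit toggles on every count)
#   Y2 = y2 XOR y1 XOR x  (carry when counting up, borrow when down)
def next_state(y2, y1, x):
    return y2 ^ y1 ^ x, 1 - y1

# Exhaustive check against the up/down counting behavior (8 rows):
ok = True
for y2 in (0, 1):
    for y1 in (0, 1):
        for x in (0, 1):
            s = 2 * y2 + y1
            expected = (s + 1) % 4 if x == 0 else (s - 1) % 4
            Y2, Y1 = next_state(y2, y1, x)
            ok = ok and (2 * Y2 + Y1 == expected)
```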

11 Sequential Circuit – Circuit Design
D flip flops are used to store the values of the two state variables between clock pulses.
The output of the flip flops is the present state of the variables.
The input, D, of the flip flops is the next state of the variables.

12 Finite State Machine Model
The example we just implemented is a "Finite State Machine" – a model or abstraction of behavior composed of a finite number of states, transitions between those states, and actions.

13 Dr. Clincy, Professor of CS
Lecture 7, Part 2 – Online: Start Chapter 4
Review Exam 1 in class now

14 CS6020 Exam 1 Results
Average score = 64 (average grade = 85)
Score SD = 22 (very large)
Grading scale used:
A-grade (6 students)
B-grade (7 students)
C-grade (5 students)
D-grade (1 student)
To get your grade logged, be sure to pass back the exam after we go over it.

15 Chapter 4 Objectives Learn the components common to every modern computer system. Be able to explain how each component contributes to program execution. Understand a simple architecture invented to illuminate these basic concepts, and how it relates to some real architectures. Know how the program assembly process works.

16 Introduction In Chapter 2, we discussed how binary-coded data is stored and manipulated by various computer system components. In Chapter 3, we described how fundamental components are designed and built from digital circuits. Also from Chapter 3, we know that memory is used to store both data and program instructions in binary. Having this background, we can now understand how computer components are fundamentally built. The next question is, how do the various components fit together to create useful computer systems?

17 Basic Structure of Computers
Coded info is stored in memory for later use.
The program is stored in memory and determines the processing steps.
The input unit accepts coded info from human operators, electromechanical devices (e.g., a keyboard), or other computers via networks.
The ALU uses the coded info to perform the desired operations.
All actions are coordinated by the control unit.
The output unit sends the results back out externally.
The input and output units are collectively called the I/O unit; the ALU and control unit are collectively called the processor.

18 CPU Basics The next question is, how is the program EXECUTED and how is the data PROCESSED properly? The computer's CPU or processor: Fetches the program instructions, Decodes each instruction that is fetched, and Performs the indicated sequence of operations on the data (executes). The two principal parts of the CPU are the Datapath and the Control Unit. Datapath – consists of an arithmetic-logic unit (ALU) and a network of storage units (registers) that are interconnected by a data bus that is also connected to main memory. Control Unit – responsible for sequencing the operations and making sure the correct data is in the correct place at the correct time.
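The fetch-decode-execute cycle can be sketched for a made-up one-address machine (the opcodes, memory layout, and accumulator design here are invented for illustration, not taken from the slides):

```python
# Program: load memory[10], add memory[11], store the sum at memory[12].
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12),
          3: ("HALT", 0),
          10: 5, 11: 7, 12: 0}

pc, acc = 0, 0                       # program counter and accumulator
while True:
    opcode, operand = memory[pc]     # FETCH the next instruction
    pc += 1
    if opcode == "LOAD":             # DECODE and EXECUTE
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break
# 5 + 7 = 12 is now stored at location 12
```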

19 CPU Basics Registers hold data that can be readily accessed by the CPU – data like addresses, the program counter, operands, and control info. Registers can be implemented using D flip-flops. A 32-bit register requires 32 D flip-flops. There are many different registers – registers to store values, to shift values, to compare values, registers that count, registers that temporarily store values, index registers to control program looping, stack pointer registers to manage stacks of info for processes, status or flag registers to hold the status or mode of operation, and general-purpose registers.

20 CPU Basics The arithmetic-logic unit (ALU) carries out logical operations (e.g., comparisons) and arithmetic operations (e.g., adding or multiplying). The ALU knows which operations to perform because it is controlled by signals from the control unit. The control unit determines which actions to carry out according to the values in a program counter register and a status register. The control unit tells the ALU which registers to use and turns on the correct circuitry in the ALU for execution of the operation. The control unit uses a program counter register to find the next instruction for execution and uses a status register to keep track of overflows, carries, and borrows.

21 The Bus The CPU shares data with other system components by way of a data bus. A bus is a set of wires that simultaneously convey a single bit along each line. One or more devices can share the bus. The sharing often results in communication bottlenecks. The speed of the bus is affected by its length and the number of devices sharing it.

22 The Bus Two types of buses are commonly found in computer systems: point-to-point and multipoint buses. A point-to-point bus connects two specific devices. A multipoint bus connects a number of devices. Because of the sharing, a bus protocol is used.

23 The Bus Buses consist of data lines, control lines, and address lines. Address lines determine the location of the source or destination of the data. Data lines convey bits from one device to another – they move the actual information from one location to another. Control lines determine the direction of data flow, and when each device can access the bus. When sharing the bus, concurrent bus requests must be arbitrated. Four categories of bus arbitration are: Daisy chain: Permissions are passed from the highest-priority device to the lowest. Centralized parallel: Each device is directly connected to an arbitration circuit. Distributed using self-selection: Devices decide among themselves which gets the bus. Distributed using collision detection: Any device can try to use the bus. If its data collides with the data of another device, it tries again.
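The daisy-chain scheme is simple enough to sketch: the grant signal propagates from the highest-priority device down the chain, and the first requester absorbs it (function name ours):

```python
def daisy_chain_grant(requests):
    """requests[0] is the highest-priority device; each entry is
    1 if that device is requesting the bus. Returns the index of
    the device granted the bus, or None if no one asked."""
    for device, wants_bus in enumerate(requests):
        if wants_bus:
            return device   # the grant stops propagating here
    return None             # the grant passes through unused

# Device 1 outranks device 2 simply by sitting earlier in the chain:
winner = daisy_chain_grant([0, 1, 1])
```

The scheme's well-known drawback follows directly from the loop: a low-priority device can be starved whenever any earlier device keeps requesting.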

24 Types of Buses Processor-memory bus – a short, high-speed bus used to transfer data to and from memory. I/O buses – longer buses that interface with many I/O devices other than the processor. Backplane bus (or system bus) – connects the processor, I/O devices, and memory. Expansion bus – connects external devices. Local bus – a data bus that connects a peripheral device directly to the CPU. Buses from a timing perspective: Synchronous buses – work off clock ticks; all devices using this bus type are synchronized by the clock rate. Asynchronous buses – control lines coordinate the operations and a "handshaking protocol" is used for the timing. These buses can scale better and work with more devices.

25 Clocks Every computer contains at least one clock that:
Regulates how quickly instructions can be executed.
Synchronizes the activities of its components.
A fixed number of clock cycles is required to carry out each data movement or computational operation – as a result, instruction performance is measured in clock cycles.
The clock frequency, measured in megahertz or gigahertz, determines the speed with which all operations are carried out.
Clock cycle time is the reciprocal (or inverse) of the clock frequency. An 800 MHz clock has a cycle time of 1.25 ns.
Clock speed should not be confused with CPU performance. The CPU time required to run a program is given by the general performance equation:
CPU time = (instructions per program) × (average cycles per instruction) × (seconds per cycle)
We see that we can improve CPU throughput when we reduce the number of instructions in a program, reduce the number of cycles per instruction, or reduce the number of nanoseconds per clock cycle.
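Both relationships are easy to check numerically (function names ours; the 800 MHz figure is the slide's own example):

```python
def cycle_time_ns(freq_mhz):
    """Cycle time is the reciprocal of clock frequency (1/MHz, in ns)."""
    return 1e3 / freq_mhz

def cpu_time_s(instructions, cpi, freq_hz):
    """General performance equation:
    CPU time = instructions x cycles-per-instruction x seconds-per-cycle."""
    return instructions * cpi / freq_hz

t_slow = cpu_time_s(1_000_000, 2.0, 800e6)  # hypothetical workload
t_fast = cpu_time_s(1_000_000, 1.0, 800e6)
# Halving CPI halves CPU time, exactly as the equation predicts.
```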

26 The Input/Output Subsystem
A computer communicates with the outside world through its input/output (I/O) subsystem.
Input device examples: keyboard, mouse, card readers, scanners, voice recognition systems, touch screens.
Output device examples: monitors, printers, plotters, speakers, headphones.
I/O devices connect to the CPU through various interfaces.
I/O can be memory-mapped, where the I/O device behaves like main memory from the CPU's point of view, or instruction-based, where the CPU has a specialized I/O instruction set.

27 Memory Organization We discussed a simple example of how memory is configured in Ch 3 – we will now cover in more detail: How memory is laid out. How memory is addressed. Envision memory as a matrix of bits – each row implemented as a register or "storage cell" – with each row being the size of an addressable word. Each register or storage cell (typically called a memory location) has a unique address. The memory addresses typically start at zero and progress upward.

28 Memory Organization Computer memory consists of a linear array of addressable storage cells that are similar to registers. Memory can be byte-addressable or word-addressable, where a word typically consists of two or more bytes. Byte-addressable case: although the word could be multiple bytes, each individual byte has an address – with the lowest address being the "address" of the word. Memory is constructed of RAM chips, often referred to in terms of length × width. If the memory word size of the machine is 16 bits, then a 4M × 16 RAM chip gives us 4M (2^22) memory locations of 16 bits each.

29 Memory Organization For alignment reasons, in reading 16-bit words on a byte-addressable machine, the address should be a multiple of 2 (i.e., 2 bytes). For alignment reasons, in reading 32-bit words on a byte-addressable machine, the address should be a multiple of 4 (i.e., 4 bytes). For alignment reasons, in reading 64-bit words on a byte-addressable machine, the address should be a multiple of 8 (i.e., 8 bytes).
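The alignment rule is just a divisibility test (function name ours):

```python
def is_aligned(address, word_bytes):
    """A word-sized access is aligned when the byte address is a
    multiple of the word size in bytes (2, 4, or 8 here)."""
    return address % word_bytes == 0

# 16-bit word at byte address 4: aligned.
# 32-bit word at byte address 6: misaligned (6 is not a multiple of 4).
# 64-bit word at byte address 16: aligned.
checks = (is_aligned(4, 2), is_aligned(6, 4), is_aligned(16, 8))
```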

30 Memory Organization How does the computer access a memory location that corresponds to a particular address? Memory is referred to using the notation Length × Width (L × W). We observe that 4M can be expressed as 2^2 × 2^20 = 2^22 words – the memory is 4M long with each item 8 bits wide. Provided this is byte-addressable, the memory locations are numbered 0 through 2^22 − 1. Thus, the memory bus of this system requires at least 22 address lines.

31 Memory Organization Physical memory usually consists of more than one RAM chip. A single memory module forces all accesses to memory to be sequential – only one memory access can be performed at a time. By splitting or spreading memory across multiple memory modules (or banks), accesses can be performed in parallel – this is called memory interleaving. With low-order interleaving, the low-order bits of the address specify which memory bank contains the address of interest. In high-order interleaving, the high-order address bits specify the memory bank.

32 Memory Organization Example: Suppose we have a memory consisting of 16 2K × 8-bit chips. Memory is 32K = 2^5 × 2^10 = 2^15 bytes, so 15 bits are needed for each address. We need 4 bits to select the chip, and 11 bits for the offset into the chip that selects the byte.

33 Memory Organization In high-order interleaving, the high-order 4 bits select the chip. In low-order interleaving, the low-order 4 bits select the chip.
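Both address splits for the 16-chip, 2K × 8 example can be expressed with shifts and masks (function names ours; 4 chip bits + 11 offset bits = 15 address bits, as on the previous slide):

```python
CHIP_BITS, OFFSET_BITS = 4, 11          # 4 + 11 = 15 address bits

def high_order_select(addr):
    """High-order interleaving: the top 4 bits pick the chip."""
    return addr >> OFFSET_BITS, addr & ((1 << OFFSET_BITS) - 1)

def low_order_select(addr):
    """Low-order interleaving: the bottom 4 bits pick the chip."""
    return addr & ((1 << CHIP_BITS) - 1), addr >> CHIP_BITS

addr = (5 << OFFSET_BITS) | 682         # chip 5, offset 682 (high-order view)

# With low-order interleaving, consecutive addresses land in consecutive
# chips, so sequential accesses can be served by different banks in parallel:
first_four_chips = [low_order_select(a)[0] for a in range(4)]
```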

