Course Code 114 Introduction to Computer Science

Presentation on theme: "Course Code 114 Introduction to Computer Science"— Presentation transcript:

1 Course Code 114 Introduction to Computer Science
Lecture 6: Machine Cycle and Data Addressing. Assoc. Prof. Hussam Elbehiery, Egypt, 2018

2

3

4 Data packets (of 8, 16, 32, 64 or more bits at a time) are constantly being moved back and forth between the CPU and all the other components (RAM, hard disk, etc.). These transfers are all done using busses.

5 There is not just one bus on a motherboard; there are several
There is not just one bus on a motherboard; there are several. But they are all connected, so that data can flow from one to another; we can say that the bus system is subdivided into several branches. Some PC components work with enormous amounts of data, while others manage with much less. For example, the keyboard sends only a few bytes per second, whereas the working storage (RAM) can send and receive several gigabytes per second. So you cannot attach the RAM and the keyboard to the same bus.

6 CPU Structure

7 Bus types: (Data Bus - Address Bus - Control Bus)
Two busses with different capacities (bandwidths) can be connected if we place a controller between them. Such a controller is often called a bridge, since it functions as a bridge between the two different traffic systems. The entire bus system starts close to the CPU, where the load (traffic) is greatest.

8 RAM is the component with the greatest data traffic, and it is therefore connected directly to the CPU by a particularly powerful bus. It is called the front side bus (FSB) or (in older systems) the system bus.

9 The motherboard’s busses are regulated by a number of controllers.
Most of these controller functions are grouped together into a couple of large chips, which together comprise the chipset.

10 The north bridge and south bridge share the work of managing the data traffic on the motherboard.

11 Two ways to greater speed
There are two quite different directions in this work: more power and speed in the CPU (for example, from higher clock frequencies), and better exploitation of existing processor power.

12 1. Clock frequencies All CPUs have a working speed, which is regulated by a tiny crystal. The crystal constantly vibrates at a very large number of “beats” per second, each beat causing the CPU to perform one (or more) actions.

13 The number of clock ticks per second is measured in Hertz.
Since the CPU’s crystal vibrates millions of times each second, the clock speed is measured in millions of oscillations (megahertz, MHz). Modern CPUs actually have clock speeds running into billions of ticks per second, so we have to use gigahertz (GHz).

14 2. Processor power (more transistors)

15 Instruction Execution
The flow of data within the PC during program execution and the saving of data

16 The instructions that can be recognized by a processor are referred to as an 'instruction set'.
The instruction set is a collection of pre-defined machine codes which the CPU is designed to expect and act upon when detected. Each machine code has two parts: Opcode | Operand(s). The opcode is a short code which indicates what operation is to be performed. The operand, or operands, indicate where the data required for the operation can be found and how it can be accessed (the addressing mode). The length of a machine code can vary; common lengths range from one to twelve bytes.
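As a sketch of this opcode-plus-operands structure, the loop below steps through a stream of variable-length machine codes. The opcode values and instruction lengths are hypothetical, invented purely for illustration; they do not belong to any real instruction set.

```python
# Hypothetical instruction set: each opcode maps to a total
# instruction length (opcode byte + operand bytes).
INSTRUCTION_LENGTHS = {
    0x00: 1,   # HALT: opcode only, no operands
    0x10: 3,   # LOAD: opcode + 2-byte address operand
    0x20: 2,   # ADDI: opcode + 1-byte immediate operand
}

program = bytes([0x10, 0x1A, 0x2B,   # LOAD 0x1A2B
                 0x20, 0x05,         # ADDI 5
                 0x00])              # HALT

# Step through the byte stream, one variable-length instruction at a time.
pc = 0
while pc < len(program):
    opcode = program[pc]
    length = INSTRUCTION_LENGTHS[opcode]
    operands = program[pc + 1 : pc + length]
    print(f"opcode={opcode:#04x} operands={operands.hex() or '-'}")
    pc += length
```

Note how the program counter advances by a different amount for each instruction, which is exactly why the decoder must know each opcode's length.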

17 Mnemonic STA (short for STore Accumulator).
Suppose we are using a 24-bit CPU. This means that the minimum length of the machine codes used here is 24 binary bits, which in this instance are split as shown below:
Opcode: 6 bits (bits 18-23) - allows for 64 unique opcodes (2^6)
Operand(s): 18 bits (bits 0-17) - 16 bits (0-15) for address values, 2 bits (16-17) for specifying the addressing mode to be used
Opcodes are also given mnemonics (short names) so that they can be easily referred to in code listings and similar documentation, e.g. the mnemonic STA (short for STore Accumulator).
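The 24-bit split described above can be sketched with a little bit-shifting. Only the field widths come from the table; the opcode value, mode bits, and address used in the example are hypothetical.

```python
# Illustrative 24-bit encoding, per the layout above:
# bits 18-23 opcode, bits 16-17 addressing mode, bits 0-15 address.

def encode(opcode, mode, address):
    assert 0 <= opcode < 64 and 0 <= mode < 4 and 0 <= address < 65536
    return (opcode << 18) | (mode << 16) | address

def decode(word):
    opcode = (word >> 18) & 0x3F    # 6 bits
    mode = (word >> 16) & 0x3       # 2 bits
    address = word & 0xFFFF         # 16 bits
    return opcode, mode, address

# A hypothetical STA with opcode 0b000101, mode 0b01, address 0x1A2B:
word = encode(0b000101, 0b01, 0x1A2B)
print(f"{word:06X}")                # the 24 bits fit in six hex digits
print(decode(word))                 # recovers (opcode, mode, address)
```

Six bits of opcode give 2^6 = 64 possible instructions, which is why the table above says 64 unique opcodes.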

18 Mnemonic - Description
MOV - Moves a data value from one location to another
ADD - Adds two data values using the ALU, and returns the result to the accumulator
STO - Stores the contents of the accumulator in the specified location
END - Marks the end of the program in memory

19 Instruction Execution Cycle

20 Fetch Cycle Decode Cycle Execute Cycle
Fetch Cycle
The fetch cycle reads the instruction at the memory address held in the program counter, stores it in the instruction register, and moves the program counter on one so that it points to the next instruction.
Decode Cycle
Here, the control unit examines the instruction now held in the instruction register. It determines which opcode and addressing mode have been used, and hence what actions need to be carried out in order to execute the instruction in question.
Execute Cycle
The actual actions which occur during the execute cycle depend on both the instruction itself and the addressing mode specified for accessing any data required. However, four main groups of actions do exist, which are discussed in full later on.
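The cycle can be sketched as a minimal simulator using the mnemonic set from the previous slide. The opcode numbering, the (opcode, operand) program format, and the simplification of MOV to a load into the accumulator are assumptions made for illustration.

```python
# Minimal fetch-decode-execute loop for the four mnemonics listed
# earlier. Memory is modeled as a dict of address -> value.
MOV, ADD, STO, END = range(4)

def run(program, memory):
    acc = 0          # accumulator
    pc = 0           # program counter
    while True:
        # Fetch: read the instruction at PC, then advance the PC.
        opcode, operand = program[pc]
        pc += 1
        # Decode/execute: act according to the opcode.
        if opcode == MOV:      # load memory[operand] into the accumulator
            acc = memory[operand]
        elif opcode == ADD:    # add memory[operand] to the accumulator
            acc += memory[operand]
        elif opcode == STO:    # store the accumulator at memory[operand]
            memory[operand] = acc
        elif opcode == END:    # end of program
            return memory

mem = {0: 7, 1: 35, 2: 0}
run([(MOV, 0), (ADD, 1), (STO, 2), (END, 0)], mem)
print(mem[2])   # 7 + 35 = 42
```

Note that the program counter is incremented during the fetch step, before the instruction executes, exactly as described above.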

21 Addressing Modes Immediate addressing Direct addressing
Immediate addressing
With immediate addressing, no lookup of data is required. The data is located within the operands of the instruction itself, not in a separate memory location. This is the quickest of the addressing modes to execute, but the least flexible; as such it is the least used of the three in practice.
Direct addressing
With direct addressing, the operands of the instruction contain the memory address where the data required for execution is stored. For the instruction to be processed, the required data must first be fetched from that location.
Indirect addressing
With indirect addressing, the operands give a location in memory, as in direct addressing. However, rather than the data being at that location, it holds another memory address where the data is actually located. This is the most flexible of the modes, but also the slowest, as two data lookups are required.
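The three modes differ only in how many memory lookups stand between the operand and the data, which a small sketch makes concrete. The memory contents here are hypothetical.

```python
# Hypothetical memory: address -> value.
memory = {10: 99, 20: 10}

def fetch_operand(mode, operand, memory):
    if mode == "immediate":   # zero lookups: the operand IS the data
        return operand
    if mode == "direct":      # one lookup: the operand is the data's address
        return memory[operand]
    if mode == "indirect":    # two lookups: the operand is the address
        return memory[memory[operand]]   # of the data's address

print(fetch_operand("immediate", 10, memory))  # 10 (the operand itself)
print(fetch_operand("direct", 10, memory))     # 99 (memory[10])
print(fetch_operand("indirect", 20, memory))   # 99 (memory[memory[20]])
```

Counting the dictionary accesses in each branch shows directly why indirect addressing is the slowest mode: it needs two lookups where direct needs one and immediate needs none.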

22 Instructions are sent from the software and are broken down into micro-ops (smaller sub-operations) in the CPU. This decomposition and execution take place in a pipeline. The pipeline works like an assembly line (shown here with 9 stages), where each clock tick leads to the execution of one sub-instruction.
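The benefit of the assembly line can be seen with simple cycle arithmetic: in the textbook idealization (no stalls, no mis-predictions), a pipeline with k stages finishes n instructions in k + (n - 1) ticks, instead of the n * k ticks needed if each instruction had to finish completely before the next one started.

```python
# Idealized pipeline timing: k stages, one sub-step per clock tick.

def pipelined_ticks(n, k):
    # The first instruction takes k ticks to fill the pipeline;
    # after that, one instruction completes every tick.
    return k + (n - 1)

def unpipelined_ticks(n, k):
    # Without overlap, every instruction needs all k ticks.
    return n * k

n, k = 1000, 9   # 9-stage pipeline, as in the figure above
print(unpipelined_ticks(n, k))   # 9000 ticks without pipelining
print(pipelined_ticks(n, k))     # 1008 ticks with pipelining
```

For long instruction streams the speedup approaches k, which is why deeper pipelines looked so attractive, until the stalls and mis-predictions discussed on the next slide eat into the ideal figure.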

23 Parallel pipelines in the one CPU perhaps performance could be doubled
Parallel pipelines in the one CPU: perhaps performance could be doubled? Unfortunately it is not that easy. It is not possible to feed a large number of pipelines with data: the memory system is simply not powerful enough, and data cannot be brought to the CPU quickly enough. Another problem with having several pipelines arises when the processor decodes several instructions in parallel, each in its own pipeline: it is impossible to avoid the wrong instruction occasionally being read in (out of sequence). This is called mis-prediction, and it results in a number of wasted clock ticks.

24 By making use of more, and longer, pipelines, processors can execute more instructions at the same time

25 Assoc. Prof. Hussam Elbehiery
Thank you With all my best wishes Assoc. Prof. Hussam Elbehiery

