Presentation on theme: "1.1.1 – Structure and functions of the processor"— Presentation transcript:

1 1.1.1 – Structure and functions of the processor
1.1.1 – Structure and functions of the processor – Types of processor
A-Level Computing

2 Specification Overview
Specification points:

Structure and function of the processor
(a) The Arithmetic and Logic Unit (ALU), Control Unit and registers (Program Counter (PC), Accumulator (ACC), Memory Address Register (MAR), Memory Data Register (MDR), Current Instruction Register (CIR)). Buses: data, address and control, and how this relates to assembly language programs.
(b) The fetch-decode-execute cycle, including its effect on registers.
(c) The factors affecting the performance of the CPU: clock speed, number of cores, cache.
(d) Von Neumann, Harvard and contemporary processor architecture.

Types of processor
(a) The differences between and uses of CISC and RISC processors.
(b) Multicore and parallel systems.

3 Context At the centre of all modern computer systems is an incredible device referred to as the Central Processing Unit (CPU), microprocessor or simply the processor. The processor is the brain of the computer; it carries out all the mathematical and logical operations necessary to execute the instructions given to it by the user. It is one of the most expensive parts of a computer system and upgrading a computer’s processor remains one of the best ways of increasing a computer’s performance. Today, processors can be found in anything from smartphones and tablets to washing machines and microwaves. Without them, modern life would look very different. As you might expect, processor designs are extremely complex and the way they are constructed (their architecture) changes rapidly. Innovations in specialist materials and improved design techniques are utilised to make them faster and more efficient.

4 How is a CPU made? To create a processor’s architecture, specialists will first design the necessary circuitry. Silicon discs are then produced by melting sand, refining it and finely slicing the resulting crystals. Next, the circuit diagrams are transferred to the silicon discs, resulting in the creation of thousands of minute transistors; these are joined together using copper to create small integrated circuits. Finally, these are packaged with the pins necessary to connect the processor to a computer’s motherboard. This process takes place in a ‘clean room’, which is a controlled and sterile environment; the smallest amount of dust could ruin the silicon. This is a very basic overview of the extraordinary manufacturing process used to create CPUs. It is well worth watching some videos to find out some more!

5 Machine Code Instructions
When code is written in languages such as Python or Visual Basic .NET, the code is readable and understandable by humans and is known as source code. Processors have no understanding of source code and cannot execute it directly unless it is translated into machine code. A compiler is a piece of software that takes source code and converts it into machine code. Source code, written as a text file, allows software to be created without the need to understand the inner workings of the instruction set of the target processor. The programming language hides many of the complexities of the processor, such as memory addressing modes.

6 Machine Code Instructions
Each processor architecture (for example, ARM or x86) has its own set of instructions. The target architecture must be specified before compiling, because machine code produced for one architecture will not run on another. Each line of source code can be compiled into many lines of machine code. The table via this URL highlights some of the key differences between CPU architectures:
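One way to see this one-to-many expansion for yourself, without a full compiler toolchain, is Python's built-in dis module. This is a rough illustration only: Python compiles to bytecode for a virtual machine rather than to native machine code, but the principle is the same, with one line of source code becoming several lower-level instructions.

    import dis

    def calculate(a, b):
        return (a + b) * 2     # a single line of source code

    # Prints several bytecode instructions for that one line
    # (the exact instruction names vary between Python versions).
    dis.dis(calculate)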

7 Machine Code Instructions
Machine code is the binary representation of an instruction, split into the opcode and the operand (the instruction's data). Assembly language uses textual mnemonics to represent machine code instructions. Each processor architecture has a set of instructions that it can run; this is known as the instruction set. Despite many differences between instruction sets, machine code follows a common structure:

Bits 1-6: Opcode
Bits 7-16: Operand (data)

NOTE - The size of instructions differs from architecture to architecture and can even differ between instructions.
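As a rough sketch of how an instruction in this format could be pulled apart in software, the Python below assumes the hypothetical 16-bit layout above (a 6-bit opcode followed by a 10-bit operand); the opcode and operand values are made up purely for illustration.

    # Split a hypothetical 16-bit instruction into a 6-bit opcode and a 10-bit operand.
    OPERAND_BITS = 10

    def decode(instruction):
        opcode = instruction >> OPERAND_BITS               # top 6 bits
        operand = instruction & ((1 << OPERAND_BITS) - 1)  # low 10 bits
        return opcode, operand

    # Example: opcode 2 with operand 9 (values chosen only to show the split).
    instruction = (2 << OPERAND_BITS) | 9
    print(decode(instruction))   # (2, 9)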

8 Instruction Sets Every instruction available in a given instruction set has a unique binary value known as the opcode. When an instruction is executed, the processor uses the opcode to determine which instruction to run (i.e. what action to perform). Below is an example instruction set; it is not based on any real-life processor and should be treated as an example only.

Opcode    Assembly mnemonic    Description
          MOV                  Moves a value to a register
          ADD                  Adds a value and stores the result in the ACC
          SUB                  Subtracts a value and stores the result in the ACC
          MLT                  Multiplies a value and stores the result in the ACC

NOTE - ACC stands for accumulator, a special purpose register (we will look into this in greater depth during this scheme of learning).

9 Instruction Sets Each opcode represents a different instruction, and mnemonics are used to make development in assembly language easier, as dealing with raw binary quickly becomes complex. As an example, the table below shows a simple program that performs the calculation (4 + 5) * 2:

Opcode    Data              Value of ACC
MOV       value 0 to ACC    0
ADD       4 to ACC          4
ADD       5 to ACC          9
MLT       2 to ACC          18
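The short Python sketch below (an illustration only, not real machine code) mimics how an accumulator-based processor would step through that program, one instruction at a time.

    # Minimal accumulator sketch for the program above: (4 + 5) * 2.
    program = [("MOV", 0), ("ADD", 4), ("ADD", 5), ("MLT", 2)]

    acc = 0
    for mnemonic, operand in program:
        if mnemonic == "MOV":
            acc = operand
        elif mnemonic == "ADD":
            acc += operand
        elif mnemonic == "SUB":
            acc -= operand
        elif mnemonic == "MLT":
            acc *= operand
        print(mnemonic, operand, "-> ACC =", acc)   # ACC ends at 18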

10 Exam Tip! Machine code and assembly code form the main focus of another part of the course and questions based on these topics may not appear in questions linked to the processor and the FDE cycle. However, the terms opcode and machine code are key in answering questions in this section successfully.

11 Relationship between machine code and assembly language
Assembly code is the step above machine code, allowing the coder to write programs using mnemonics, which represent machine code instructions. Each assembly code instruction has a one-to-one mapping with a machine code instruction. This is unlike source code, which has a one-to-many mapping. Assembly code is converted into machine code by an assembler.

12 Little Man Computer (LMC)
The Little Man Computer (LMC) is a simple architecture designed to help you understand the concepts of machine code and instruction sets. Unlike real instruction sets, the LMC has a very limited selection of instructions. There is a range of LMC instructions you must know for the exam (see appendix 5e). An xx in the opcode refers to the data part (operand) of the instruction, which is only present if required, as not every instruction needs data. For example, in order to add, you first need to know what you are adding.

To keep things simple, the LMC has only two registers, the ACC and the PC. Data and instructions are stored in memory locations known as mailboxes; this is directly analogous to a standard von Neumann-based computer (we will explore this later), and conceptually a mailbox represents a single location in memory. Mailboxes can be referred to directly or indirectly through the use of labels. A label is a text symbol that represents a mailbox, making coding in LMC easier. When a label is used to represent a mailbox, the LMC assembler assigns a suitable mailbox as the code is assembled.
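To illustrate how labels are resolved, here is a minimal two-pass assembler sketch in Python. It is an illustration only: the program and the label FIRST are hypothetical, and the opcode numbering follows the common LMC scheme (1xx ADD, 2xx SUB, 3xx STA, 5xx LDA, 901 INP, 902 OUT, 000 HLT), which matches the machine code shown in the task answers later in this presentation.

    # A two-pass assembler sketch for a tiny LMC-style program (illustration only).
    OPCODES = {"ADD": 100, "SUB": 200, "STA": 300, "LDA": 500,
               "INP": 901, "OUT": 902, "HLT": 0}

    source = [
        "INP",
        "STA FIRST",
        "ADD FIRST",
        "OUT",
        "HLT",
        "FIRST DAT",
    ]

    # Pass 1: record which mailbox each label will occupy.
    labels = {}
    for mailbox, line in enumerate(source):
        first_word = line.split()[0]
        if first_word not in OPCODES and first_word != "DAT":
            labels[first_word] = mailbox          # e.g. FIRST -> mailbox 5

    # Pass 2: translate mnemonics, substituting mailbox numbers for labels.
    machine_code = []
    for line in source:
        parts = line.split()
        if parts[0] in labels:                    # a labelled DAT line just reserves a mailbox
            machine_code.append(0)
        elif len(parts) == 2:                     # instruction with a label as its operand
            machine_code.append(OPCODES[parts[0]] + labels[parts[1]])
        else:                                     # instruction with no operand
            machine_code.append(OPCODES[parts[0]])

    print(machine_code)   # [901, 305, 105, 902, 0, 0]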

13 Little Man Computer (LMC)
In questions, mnemonics will always be given according to the left-hand column below. Different implementations of the LMC use slight variations in mnemonics; to take this into account, the alternative mnemonics in the right-hand column will be accepted in learners' answers.

14 Little Man Computer (LMC) Task
Visit this URL - In the LMC simulator, type in the following instructions:

INP
STA
ADD
OUT
HLT
DAT

Using appendix 5e and what the simulator demonstrates, write out line for line what is taking place.

15 Little Man Computer (LMC) Task
In the LMC simulator, type in the following instructions:

INP
STA FIRST
INP
ADD FIRST
OUT
INP
SUB FIRST
OUT
HLT
FIRST DAT

Using appendix 5e and what the simulator demonstrates, write out line for line what is taking place.

16 Little Man Computer (LMC) Task
What you should have seen… After the program is compiled, you should see in mailboxes 0 to 7 the instructions 901, 309, 901, 109, 902, 901, 209, 902. The Program Counter should start at 0 (click on "Reset" if necessary). DAT is the tenth instruction of your program, so it refers to mailbox 9 (0-indexed counting). FIRST is the identifier that has been declared to represent this mailbox in the assembly language program. When you click on "Run" or "Step", the Message Box will describe the actions of each instruction.

After the first INP instruction, the Accumulator has a copy of the first value entered in the In Box.
After the STA instruction, the input value is copied from the Accumulator to mailbox 9.
After the second INP instruction, the Accumulator has a copy of the second value entered into the In Box (try values that will lead to three and four digit sums – what happens?).
After the ADD instruction, the Accumulator holds the sum of the two input values (in the instruction 109, 1 means ADD and 09 refers to the mailbox where the value to be added to the Accumulator is stored).
After the third INP instruction, the Accumulator has a copy of the third value entered into the In Box (try values that will lead to positive and negative results – what happens?).
After the SUB instruction, the Accumulator holds the difference between the third input value and the stored first value (in the instruction 209, 2 means SUBTRACT and 09 refers to the mailbox where the value to be subtracted from the Accumulator is stored).

17 Little Man Computer (LMC) Task
In the LMC simulator, type in the following instructions:

INP
STA FIRST
INP
STA SECOND
LDA FIRST
OUT
LDA SECOND
OUT
HLT
FIRST DAT
SECOND DAT

Using appendix 5e and what the simulator demonstrates, write out line for line what is taking place.

18 Little Man Computer (LMC) Task
What you should have seen… After the program is compiled, you should see in mailboxes 0 to 7 the instructions 901, 309, 901, 310, 509, 902, 510, 902. The Program Counter should start at 0 (click on "Reset" if necessary). The DAT statements are the tenth and eleventh instructions of your program, so they refer to mailboxes 9 and 10 (0-indexed counting). FIRST and SECOND are the identifiers that have been declared to represent these mailboxes in the assembly language program. When you click on "Run" or "Step", the Message Box will describe the actions of each instruction.

After the first INP instruction, the Accumulator has a copy of the first value entered in the In Box.
After the STA instruction, the input value is copied from the Accumulator to mailbox 9 (in the instruction 309, 3 means STORE and 09 refers to the mailbox to store into).
After the second INP instruction, the Accumulator has a copy of the second value entered into the In Box (use a different value than the first).
After the second STA instruction, the second input value is copied from the Accumulator to mailbox 10.
After the LDA instruction, the Accumulator is reset to the first input value. This value has been retrieved from mailbox 9 (in the instruction 509, 5 means LOAD and 09 refers to the mailbox to load from). Note: since your Accumulator can only work on one value at a time, you have to repeatedly STORE and LOAD values from memory to keep them from getting erased by the next operation.
After the OUT instruction, the Out Box has a copy of the value in the Accumulator -- the first input value.

19 Components of a CPU

[Diagram: general purpose registers (R0-R5), Arithmetic Logic Unit, Control Unit, Program Counter, Current Instruction Register, Memory Address Register, Memory Data Register and Status Register.]

I will place a copy of this diagram onto the digital learning space and Google Classroom for easy annotation and note taking within your blogs.

The design is based upon a von Neumann machine (the stored program approach), where both the data and the program are stored in main memory.

20 Components of a CPU We should now be familiar with the concept that when a program is run, the program and its data are copied from the hard drive into main memory. This brings about the FETCH - DECODE - EXECUTE cycle:

Fetch: the next instruction is fetched from main memory.
Decode: the instruction is interpreted (decoded) and signals are produced to control the other internal components (the ALU, for example).
Execute: the instruction is carried out.

21 Registers A processor contains many registers, some reserved for specific purposes while others are used for general purpose calculations. A register is a small block of memory, usually 4 or 8 bytes, which is used as temporary storage for instructions and data as they are being processed. This temporary storage runs at the same speed as the processor. Machine code instructions can only be executed once they have been loaded into registers. General purpose registers are used as programs run, to enable calculations to be made, and can be used for any purpose the programmer (or compiler) chooses. Special purpose registers are crucial to how the processor works; values are loaded into and out of them during the execution of a program. Some of the key special purpose registers are:

program counter (PC)
memory address register (MAR) and memory data register (MDR)
current instruction register (CIR)
accumulator (ACC).

22 The Program Counter (PC)
MAIN MEMORY
Address    Contents
500        LDA 1000
503        ADD 1001
506        STO 1002
....
1000       03
1001       04
1002       05

The program counter holds the address of the next instruction to be fetched, decoded and executed. It is incremented automatically while the current instruction is being decoded. Usually the PC simply moves on by one to the next instruction, but sometimes a branch (BRA, BRZ, BRP) takes place (for example when a procedure is called), so the next instruction has to come from somewhere else in memory.

[Register diagram: Program Counter = 500; the other registers are empty at this stage.]

23 Memory Address Register (MAR)
[Main memory and register diagram as on the previous slide, with Program Counter = 500.]

The Memory Address Register (MAR) holds the address of the memory location currently being accessed. During the fetch stage it points to the location in memory where the required instruction is held; at this stage the address is simply copied from the Program Counter.

MAR ← [PC] (the contents of the Program Counter are copied into the Memory Address Register)

24 Memory Data Register (MDR)
[Main memory and register diagram as before, now with the Memory Data Register holding LDA 1000.]

The Memory Data Register (MDR) can contain both instructions and data. At this stage, an instruction has been fetched and is held here en route to the Current Instruction Register. The instruction is copied from the memory location pointed to by the MAR.

MDR ← [memory addressed by the MAR] (the contents of the addressed memory location are copied into the Memory Data Register, also known as the Memory Buffer Register, MBR)

25 Current Instruction Register (CIR)
[Main memory and register diagram as before, now with the Current Instruction Register holding LDA 1000.]

The Current Instruction Register (CIR) stores the current instruction while it is decoded and executed (the instruction is copied into it from the MDR). While the instruction in the CIR is being decoded and executed, the next instruction can be fetched into the MDR.

26 Decoding and executing the instruction
[Main memory and register diagram as before, now with Program Counter = 503, Memory Address Register = 500 and Current Instruction Register = LDA 1000.]

The instruction in the CIR is decoded. In this example, the instruction tells the processor to load the value in memory location 1000 (03) into the accumulator (one of the general purpose registers is usually used as the accumulator). As this happens, the Program Counter automatically increments.

PC ← [PC] + 1
[CIR] decoded and executed
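To pull the whole cycle together, here is a minimal fetch-decode-execute sketch in Python. It is an illustration only, using the example memory contents above; real processors do all of this in hardware, and the three-location spacing between instructions is specific to this example.

    # Fetch-decode-execute sketch using the example memory above (illustration only).
    memory = {
        500: ("LDA", 1000), 503: ("ADD", 1001), 506: ("STO", 1002),
        1000: 3, 1001: 4, 1002: 5,
    }

    pc, acc = 500, 0
    for _ in range(3):              # run the three instructions in the example
        mar = pc                    # MAR <- [PC]
        mdr = memory[mar]           # MDR <- contents of the addressed location
        cir = mdr                   # CIR <- [MDR]
        pc += 3                     # PC moves on to the next instruction
        opcode, operand = cir       # decode
        if opcode == "LDA":         # execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STO":
            memory[operand] = acc
        print("CIR =", opcode, operand, "| PC =", pc, "| ACC =", acc)

    print(memory[1002])             # 7: the result of 3 + 4, stored back to location 1002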

27 The Control Unit (CU)

The control unit co-ordinates all of these fetch-decode-execute activities. At each clock pulse, it controls the movement of data and instructions between the registers, main memory and the input and output devices. Some instructions may take less time than a single clock cycle, but the next instruction will only start when the processor begins the next cycle.

[Main memory and register diagram as before, now with Memory Address Register = 1000 and Memory Data Register = 03.]

28 The Status Register (SR)

The Status Register stores a combination of bits (flags) used to indicate the result of an instruction. For example, one bit is set to indicate that an instruction has caused an overflow; another bit is set to indicate that the instruction produced a negative result. The Status Register also indicates whether an interrupt has been received (linking back to previous chapters).

[Main memory and register diagram as on the previous slide.]

29 Arithmetic Logic Unit (ALU)
The Arithmetic Logic Unit carries out any arithmetic and logical operations (calculations and value comparisons) required by the instructions being executed. Calculations include floating point multiplication and integer division, while logic operations include comparison tests such as greater than or less than. For example, the instruction at address 503 requires the Arithmetic Logic Unit to add the number in location 1001 (04) to the value already in the accumulator.

[Main memory and register diagram as on the previous slide.]

30 Accumulator (ACC) Any instruction that performs a calculation makes use of the accumulator (ACC), and many instructions operate on, or update, the ACC. If a subtraction instruction is run, it performs the subtraction using the data part of the instruction and stores the result in the ACC. Calculations take a step-by-step approach, so the result of the last calculation forms part of the next. Some instruction sets, like the LMC's, use the ACC as an implicit part of the calculation, so only one value needs to be given in the data part of the instruction. This is shown below:

Instruction      ACC
Initial value    0
ADD 4            4
ADD 2            6
SUB 1            5

31 Summary Video

32 Buses A bus is a path down which information can pass. If you think of a bus as being a set of wires which is reserved for a particular type of information you won’t go far wrong, although it doesn’t have to be wires. There are three different types of bus that we are interested in. They will all look the same but the difference is in what they are used to convey.

33 Buses
Data bus - this is used to carry the data that needs to be transferred from one part of the hardware to another, often memory.
Address bus - this carries the address of the location to which the data on the data bus should be delivered. The address and the data travel in tandem to each of the locations in turn until the component being addressed recognises its address; it then grabs the data that is being transported.
Control bus - previously we learnt that one of the components of the processor is the Control Unit (CU), whose job is to control and coordinate the operations of the rest of the processor. It can only do this if it can send commands to the different components of the system; the control bus is used to carry these control signals.
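As a very rough software analogy (illustration only; real buses are parallel electrical connections, not program variables), the sketch below shows how the three buses cooperate: the address bus carries where, the data bus carries what, and the control bus says whether it is a read or a write.

    # Treating the buses as simple variables during a memory read and a memory write.
    memory = {1000: 3, 1001: 4, 1002: 5}

    # Read: the CPU puts an address on the address bus and a READ signal on the control bus;
    # memory responds by placing the contents of that location on the data bus.
    address_bus, control_bus = 1000, "READ"
    data_bus = memory[address_bus]
    print(data_bus)                 # 3

    # Write: the address and the data travel together, with a WRITE signal on the control bus.
    address_bus, data_bus, control_bus = 1002, 7, "WRITE"
    memory[address_bus] = data_bus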

34 Summary Video

35 Exam Tip! You must be able to describe the three types of bus and the type of information sent over each of them.

36 Processor Speed Processor speed is measured by the number of clock cycles that the processor can perform in a second, measured in hertz (Hz). A clock cycle is one increment of the CPU clock. During a clock cycle the processor can fetch, decode and execute a simple instruction, such as load, store or jump; more complex instructions take more than one clock cycle. For example, a 3.2 GHz processor performs 3.2 billion clock cycles per second.
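As a quick worked example (simply restating the 3.2 GHz figure above), the clock speed also tells you how long a single cycle lasts.

    # Clock speed to cycle time.
    clock_speed_hz = 3.2e9                   # 3.2 GHz = 3.2 billion cycles per second
    seconds_per_cycle = 1 / clock_speed_hz
    print(seconds_per_cycle)                 # 3.125e-10 s, i.e. about 0.3 nanoseconds per cycle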

37 Processor Speed Processors have progressed to a level where it is hard to increase speed further. When the number of clock cycles per second is increased, the transistors within the processor have to switch faster, which generates more heat. Unless the heat is drawn away from the processor, it can easily overheat. A processor can reach temperatures in excess of 300°C, hotter than an oven, in a matter of seconds. If you open up a desktop computer, you will see a large heat sink and fan attached to the processor; these have the job of drawing the heat away from the processor. Processor manufacturers, such as AMD or Intel, therefore try other methods to increase the performance of their CPUs. These include increasing the number of cores (processors) in the CPU, making existing circuitry more efficient, or developing new instructions. In order to make a fair comparison of these processors, they would need to be subjected to controlled tests.

38 Task Investigate the factors affecting the performance of the CPU, including clock speed, number of cores, cache. Write a short essay detailing your understanding of each of the points listed above. Be sure to use a range of headings/sub-headings to make your understanding very clear. Use diagrams and drawings to support as needed. Keep a list of references used at the end of your essay.

39 Common Architectures – Von Neumann, Harvard, etc.
Take some time to have a look at each of these architectures in more detail using the ELEVATE textbook and other appropriate resources. Record your findings within your blog.

40 Types of Processor You have seen one type of processor architecture (von Neumann), but there are many other types of architecture and many different ways of implementing them. Generally, however, computer architectures can be categorised as either RISC or CISC. Reduced instruction set computer (RISC) Complex instruction set computer (CISC)

41 Reduced instruction set computer (RISC)
RISC architectures support only a small number of very simple instructions, which can be completed in a single clock cycle. This means that individual instructions are executed extremely quickly, but more instructions are needed to complete a given task. RISC designs were developed in the 1980s, when it was realised that most computer programs use only a very small proportion of the available instructions (around 20-25%). Creating an architecture that used only the most popular instructions allowed much more efficient processing. A side effect of the simpler instruction set is that RISC architectures need a greater number of registers to provide faster access to data when programs are being executed.

42 Complex instruction set computer (CISC)
CISC architectures support a large number of complicated instructions. This means that instructions can take many clock cycles to complete, as a single CISC instruction might be made up of a number of smaller RISC-type instructions. Thus, CISCs tend to be slightly slower than their RISC counterparts. However, they can support a much wider range of addressing modes. When many programs were written in Assembly language, CISCs were very popular, as writing programs for them is much easier.

43 RISC vs. CISC The main difference between RISC and CISC is the CPU time taken to execute a given program. CPU time is generally calculated using the following formula:

CPU time = (number of instructions) × (average cycles per instruction) × (seconds per cycle)

RISC architectures seek to shorten execution time by reducing the average number of clock cycles per instruction. Conversely, CISC systems try to shorten execution time by reducing the number of instructions per program.
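The Python below gives a worked comparison using this formula. The instruction counts and cycles per instruction are invented purely to show how the trade-off behaves; they are not benchmarks of real processors.

    # CPU time = number of instructions x average cycles per instruction x seconds per cycle.
    seconds_per_cycle = 1 / 2e9                        # both hypothetical chips clocked at 2 GHz

    risc_time = 1_500_000 * 1.2 * seconds_per_cycle    # more, simpler instructions
    cisc_time = 1_000_000 * 3.0 * seconds_per_cycle    # fewer, more complex instructions

    print(risc_time, cisc_time)                        # 0.0009 s vs 0.0015 s for this made-up program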

44 RISC vs. CISC – A brief summary
Summary:

RISC                                                   CISC
Simple instructions                                    Complex instructions (often made up of many simpler instructions)
Fewer instructions                                     A large range of instructions
Fewer addressing modes                                 Many addressing modes
Only LOAD and STORE instructions can access memory     Many instructions can access memory
Takes one cycle per instruction                        Takes more cycles per instruction
More complicated software code                         More compact software code

45 Application of RISC and CISC
Question: Is it better to make more complicated instructions available that take many cycles to complete, or is it better to restrict the processor to a smaller, simpler instruction set in which each instruction takes only a single cycle to complete?

Answer: It depends. Until recently, the major chip makers preferred the CISC approach: each generation of their chips offered larger and richer instruction sets than the one before. Now, however, the RISC approach seems to be the favoured one.

Say you want to multiply two numbers 'a' and 'b'. In a CISC chip a single instruction such as MULT a,b is available. The chip maker adds more and more complex hardware circuitry within the CPU to carry out these instructions, so the trade-off is more complex hardware to support simpler software coding. The compiler, when it sees a multiply command written in high level language source code, can generate a single machine code instruction to carry out the task - job done.

In a RISC chip it is the other way around: keep the hardware simple and let the software be more complicated. There may be no single multiply instruction available, so the compiler now has to generate more lines of code, such as:

LOAD a from memory into register1
LOAD b from memory into register2
PROD register1, register2 (multiply)
STORE the answer back into memory

But each of those instructions can be carried out in a single cycle. You can also use pipelining to speed this up even more (since 'a' and 'b' do not depend on each other). So overall the RISC approach may be faster.

46 Multicore Systems The classic von Neumann architecture uses only a single processor to execute instructions. In order to improve the computing power of processors, it was necessary to increase the physical complexity of the CPU. Traditionally this was done by finding new, ingenious ways of fitting more and more transistors onto the same size chip. There was even a rule called Moore’s Law, which predicted that the number of transistors that could be placed on a chip would double every two years.

47 Multicore Systems However, as computer scientists reached the physical limit of the number of transistors that could be placed on a silicon chip, it became necessary to find other means of increasing the processing power of computers. One of the most effective ways of doing this came about through the creation of multicore systems (computers with multiple processors). Multicore microprocessors, in which the processor has several cores, are now very common, allowing multiple programs or threads to be run at once.

48 Parallel Systems One of the most common types of multicore system is the parallel processor. The chances are that you're using a parallel processing system right now; they tend to be referred to as dual-core (two processors) or quad-core (four processors) computers. In parallel processing, two or more processors work together to perform a single task. The task is split into smaller sub-tasks (threads), which are executed simultaneously by all available processors (any sub-task can be processed by any of the processors). This hugely decreases the time taken to execute a program; however, software has to be specially written to take advantage of these multicore systems.
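The sketch below (Python, illustration only; the work function and the number of chunks are made up) shows the idea of splitting a single task into sub-tasks that run on several cores, then combining the results at the end.

    # Split a task into sub-tasks, run them on multiple cores, then combine the results.
    from concurrent.futures import ProcessPoolExecutor

    def sub_task(chunk):
        return sum(chunk)                              # a made-up unit of work

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]        # four sub-tasks for, say, a quad-core CPU

        with ProcessPoolExecutor(max_workers=4) as pool:
            partial_results = list(pool.map(sub_task, chunks))

        # The results from each core still have to be combined into a single answer.
        print(sum(partial_results))                    # same total as sum(data)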

49 Parallel Systems Parallel computing systems are generally placed in one of three categories:

Multiple instruction, single data (MISD) systems have multiple processors, with each processor using a different set of instructions on the same set of data.

Single instruction, multiple data (SIMD) computers have multiple processors that follow the same set of instructions, with each processor taking a different set of data. Essentially, SIMD computers process lots of different data concurrently using the same algorithm.

Multiple instruction, multiple data (MIMD) computers have multiple processors, each of which is able to process instructions independently of the others. This means that a MIMD computer can truly process a number of different instructions simultaneously. MIMD is probably the most common parallel computing architecture.

50 Parallel Systems All the processors in a parallel processing system act in the same way as standard single-core (von Neumann) CPUs, loading instructions and data from memory and acting accordingly. However, the different processors in a multicore system need to communicate continuously with each other in order to ensure that if one processor changes a key piece of data (for example, the players’ scores in a game), the other processors are aware of the change and can incorporate it into their calculations. There is also a huge amount of additional complexity involved in implementing parallel processing, because when each separate core (processor) has completed its own task, the results from all the cores need to be combined to form the complete solution to the original problem.

51 Parallel Systems This complexity meant that in the early days of parallel computing it was still sometimes faster to use a single processor, as the additional time taken to co-ordinate communication between processors and combine their results into a single solution was greater than the time saved by sharing the workload. However, as programmers have become more adept at writing software for parallel systems, this has become less of an issue.

52 Co-Processing Another common way of implementing a multi-processor system is by using a co-processor. Like parallel processing, this involves adding another processor to the computer, but in a co-processor system the additional processor is responsible for carrying out a specific task, such as graphics processing or advanced mathematical operations. The co-processor and the central processor operate on different tasks at the same time, which has the effect of speeding up the overall execution of the program. This is one of the reasons why computers with graphics cards run computer games much more smoothly than those without.

53 Co-Processing Some common examples of co-processors are:
Floating point units (FPU) – these are built into the main CPU and run floating point mathematical operations, which tend to be more processor intensive than standard arithmetic.
Digital signal processors (DSP) – commonly used to process sound effects and merge sound channels so that they can be output in stereo or surround sound.
Graphics processing units (GPU) – most modern computers have a GPU installed, either on the motherboard or on a separate card. These perform the massively complex 3D calculations needed for games, the transparency effects used to make the user interface look good, and so on.

54 Advantages and disadvantages of multicore systems
Summary:

Advantages:
More jobs can be done in a shorter time because they are executed simultaneously.
Tasks can be shared to reduce the load on individual processors and avoid bottlenecks.

Disadvantages:
It is difficult to write programs for multicore systems, ensuring that each task has the correct input data to work on.
Results from different processors need to be combined at the end of processing, which can be complex and adds to the time taken to execute a program.
Not all tasks can be split across multiple processors.

