Interprocessor arbitration

Interprocessor arbitration
Arbitration logic resolves bus conflicts; it forms part of the system bus controller.
Two types:
Serial arbitration procedure
Parallel arbitration procedure
The arbitration procedure services all processor requests on the basis of established priorities.

Serial arbitration procedure
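A minimal behavioral sketch of the serial (daisy-chain) procedure may help: a priority-in (PI) signal enters the highest-priority arbiter and is passed along the chain, and the first requesting arbiter that still sees PI active takes the bus. This is an illustrative model only; the names daisy_chain_grant and NUM_CPUS, and the chain length, are assumptions rather than part of the original slides.

    #include <stdio.h>

    #define NUM_CPUS 4   /* assumed chain length for illustration */

    /* Serial (daisy-chain) arbitration: the highest-priority arbiter receives
       priority-in (PI) = 1; each arbiter passes PI on only if it is not
       requesting the bus, so the first requester along the chain wins. */
    int daisy_chain_grant(const int request[NUM_CPUS])
    {
        int pi = 1;                          /* PI of the first arbiter in the chain */
        for (int i = 0; i < NUM_CPUS; i++) {
            if (request[i] && pi)
                return i;                    /* this arbiter takes the bus */
            pi = pi && !request[i];          /* priority-out feeds the next arbiter */
        }
        return -1;                           /* no processor requested the bus */
    }

    int main(void)
    {
        int request[NUM_CPUS] = {0, 1, 0, 1};    /* CPUs 1 and 3 want the bus */
        printf("bus granted to CPU %d\n", daisy_chain_grant(request));
        return 0;
    }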

Parallel arbitration logic
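For comparison, a similarly hedged sketch of the parallel arbitration idea: all bus-request lines are examined at once by a priority encoder, which produces the number of the highest-priority requester in a single step instead of propagating a signal along a chain. The function name and bit layout are assumptions made for the example.

    #include <stdio.h>

    #define NUM_CPUS 4   /* assumed number of processors */

    /* Parallel arbitration: every bus-request line feeds a priority encoder,
       which outputs the number of the highest-priority requester in one step;
       a decoder would then raise that processor's bus-grant line. */
    int priority_encoder(unsigned request_lines)     /* bit i = CPU i's request */
    {
        for (int i = 0; i < NUM_CPUS; i++)
            if (request_lines & (1u << i))
                return i;                            /* encoded grant */
        return -1;                                   /* no request asserted */
    }

    int main(void)
    {
        unsigned requests = (1u << 1) | (1u << 3);   /* CPUs 1 and 3 request */
        printf("grant goes to CPU %d\n", priority_encoder(requests));
        return 0;
    }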

Dynamic arbitration algorithms
Time slice
Polling
LRU (least recently used)
FIFO (first-in, first-out)
Rotating daisy chain
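As an illustration of one dynamic scheme from the list above, the sketch below models a rotating daisy chain: the search for the next requester starts just after the processor that was granted the bus last time, so the highest priority rotates instead of being fixed. The names rotating_grant and NUM_CPUS are illustrative assumptions.

    #include <stdio.h>

    #define NUM_CPUS 4   /* assumed number of processors */

    /* Rotating daisy chain: priority rotates, starting just after the
       processor that used the bus most recently. */
    int rotating_grant(const int request[NUM_CPUS], int last_granted)
    {
        for (int offset = 1; offset <= NUM_CPUS; offset++) {
            int cpu = (last_granted + offset) % NUM_CPUS;
            if (request[cpu])
                return cpu;
        }
        return -1;                           /* no requests pending */
    }

    int main(void)
    {
        int request[NUM_CPUS] = {1, 0, 1, 1};
        printf("next grant: CPU %d\n", rotating_grant(request, 2));  /* prints 3 */
        return 0;
    }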

Interprocessor communication and synchronization
Shared memory: communication is through a common area of shared memory. The sender can alert the receiver by means of an interrupt, and data can also be transferred through an I/O path.

Loosely coupled: communication is through I/O channels. One processor calls a procedure that resides in the memory of the destination node. The operating system in each node controls the communication.

Interprocessor synchronization
A special case of communication in which the information transferred consists of control signals. It is needed to enforce the correct sequencing of processes and to ensure mutually exclusive access to shared writable data.

Multiprocessor systems usually include a number of mechanisms to deal with the synchronization of resources. One of the most popular methods is the use of a binary semaphore.

Mutual exclusion with a semaphore
A properly functioning multiprocessor system must provide a mechanism that guarantees orderly access to shared memory and other shared resources. This is necessary to protect data from being changed simultaneously by two or more processors. The mechanism has been termed mutual exclusion: it enables one processor to exclude, or lock out, access to a shared resource by other processors while it is in a critical section.
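To see why this matters, the sketch below lets two POSIX threads stand in for two processors updating a shared counter. Without a lock, the read-modify-write sequences can interleave and updates are lost; with the mutex, the final count is always correct. The program and its names are illustrative assumptions, not part of the original material.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                     /* shared writable data */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);           /* enter critical section */
            counter++;                           /* read-modify-write on shared data */
            pthread_mutex_unlock(&lock);         /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Always 200000 with the mutex; without it, lost updates are likely. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Built with a POSIX C compiler (for example, cc -pthread demo.c), the locked version always prints 200000; removing the lock calls makes a smaller result likely.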

A critical section is a program sequence that, once begun, must complete execution before another processor accesses the same shared resource. A binary variable called a semaphore is often used to indicate whether or not a processor is executing a critical section. It is a software-controlled flag stored in a memory location that all processors can access. When the semaphore is equal to 1, a processor is executing a critical program and the shared memory is not available to other processors.

When the semaphore is equal to 0, the shared memory is available to any requesting processor. A processor that accesses shared memory to execute a critical section sets the semaphore and clears it when it is finished. Testing and setting the semaphore is itself a critical section: if it were not, two or more processors could set the semaphore simultaneously and enter their critical sections at the same time. A semaphore can be initialized by means of a test-and-set instruction in conjunction with a hardware lock mechanism.

A hardware lock is a processor-generated signal that serves to prevent other processors from using the system bus as long as the signal is active. The test-and-set instruction tests and sets a semaphore and activates the lock mechanism during the time the instruction is being executed. This prevents other processors from changing the semaphore between the time the processor is testing it and the time it is setting it. Assume that the semaphore is a memory location symbolized by SEM.

Let the mnemonic TSL designate the "test and set while locked" operation. The instruction TSL SEM performs:
R ← M[SEM]    (test semaphore)
M[SEM] ← 1    (set semaphore)
The value left in R determines what to do next: if R = 1, another processor is already in its critical section, so the program branches back and tests again; if R = 0, the processor may proceed into its own critical section. The last instruction in the program must clear SEM to 0.
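The same protocol can be sketched in C with a C11 atomic_flag, whose atomic_flag_test_and_set operation behaves like TSL in that it returns the old value and sets the flag in one indivisible step (the indivisibility playing the role of the hardware lock). This is a minimal sketch under that assumption; SEM here is an ordinary program variable, not a specific memory-mapped location.

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_flag sem = ATOMIC_FLAG_INIT;   /* the semaphore, initially 0 */

    static void acquire(void)
    {
        /* Like TSL SEM: read the old value and set the semaphore to 1 atomically.
           If the old value was 1, another processor is in its critical section,
           so keep testing (busy-wait). */
        while (atomic_flag_test_and_set(&sem))
            ;   /* spin */
    }

    static void release(void)
    {
        /* Last step of the critical program: clear SEM back to 0. */
        atomic_flag_clear(&sem);
    }

    int main(void)
    {
        acquire();
        printf("in critical section\n");   /* access shared resources here */
        release();
        return 0;
    }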

Cache coherence
A memory scheme is coherent if the value returned on a load instruction is always the value given by the latest store instruction with the same address.
Conditions for incoherence: cache coherence problems exist in multiprocessors with private caches because of the need to store writable data. Consider the three-processor configuration with private caches shown below.
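The situation can be mimicked with a toy model in code: three private caches each hold a copy of location X, one processor performs a write-through store, and the other caches are left with a stale value, violating the coherence condition stated above. The names and the values 52 and 120 are illustrative assumptions.

    #include <stdio.h>

    /* Toy model: one shared memory word X plus a private cached copy per CPU. */
    static int memory_x = 52;
    static int cache_x[3] = {52, 52, 52};   /* P1, P2, P3 all loaded X earlier */

    static void write_through_no_invalidate(int cpu, int value)
    {
        cache_x[cpu] = value;   /* update the writer's private cache */
        memory_x = value;       /* write-through to main memory */
        /* The other private caches are NOT updated or invalidated. */
    }

    int main(void)
    {
        write_through_no_invalidate(0, 120);             /* P1 stores X = 120 */
        printf("memory: %d, P2 sees: %d\n",              /* P2 still reads 52 */
               memory_x, cache_x[1]);
        return 0;
    }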

Solutions to the cache coherence problem
Another configuration that may cause inconsistency is DMA activity.
Software solutions:
Disallow private caches for each processor and have a shared cache memory associated with main memory.
Allow only nonshared and read-only data to be stored in caches (cachable); shared writable data are noncachable. The compiler must tag data as cachable or noncachable, and the system hardware makes sure that only cachable data are stored in caches; noncachable data remain in main memory.
Another approach employs a centralized global table, built by the compiler, that records the status of shared memory blocks.
Hardware solutions:
Snoopy cache controller: monitors all bus requests; all caches constantly monitor the bus for possible write operations, typically in conjunction with a write-through policy.
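Extending the earlier toy model, the sketch below illustrates the snoopy idea under an assumed write-through, write-invalidate policy: every write is placed on the bus and updates main memory, and each other cache that snoops a write to an address it holds invalidates its copy, so the next read fetches the fresh value from memory. This is a simplified illustration, not the protocol of any particular machine.

    #include <stdio.h>
    #include <stdbool.h>

    #define NUM_CPUS 3

    static int  memory_x = 52;
    static int  cache_x[NUM_CPUS]     = {52, 52, 52};
    static bool cache_valid[NUM_CPUS] = {true, true, true};

    /* Snoopy write-invalidate with write-through: the write goes on the bus,
       memory is updated, and every other cache holding the address marks its
       copy invalid. */
    static void bus_write(int cpu, int value)
    {
        cache_x[cpu] = value;
        cache_valid[cpu] = true;
        memory_x = value;                         /* write-through */
        for (int i = 0; i < NUM_CPUS; i++)
            if (i != cpu)
                cache_valid[i] = false;           /* snoop hit: invalidate copy */
    }

    static int cpu_read(int cpu)
    {
        if (!cache_valid[cpu]) {                  /* miss: refill from memory */
            cache_x[cpu] = memory_x;
            cache_valid[cpu] = true;
        }
        return cache_x[cpu];
    }

    int main(void)
    {
        bus_write(0, 120);                        /* P1 stores X = 120 */
        printf("P2 now reads %d\n", cpu_read(1)); /* 120: coherence maintained */
        return 0;
    }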