
Concurrency

What is Concurrency
The ability to execute two operations at the same time.
Physical concurrency
– multiple processors on the same machine
– distributing across networked machines
Logical concurrency
– the illusion of parallelism, or partial parallelism
The designer/programmer doesn't care which!

Real or Apparent? It depends on your point of view:
– multiple computers in distributed computing, or multiple processors in one computer
– multiple clients on the same or multiple computers
– multiple servers on one or many machines
Where does the complexity lie? It varies with how you design the system.

Why is concurrency important? One machine is only capable of a limited speed.
Multiple machines/processors can:
– share the workload and gain better utilization
– match responsibilities/requirements to each machine's abilities
– place secure processes in secure environments
– use parallel processors tailored to the problem domain
On a single machine, the OS can give limited parallelism through scheduling.

Concurrency considerations
– What level of concurrency to consider
– How it is handled on a single processor (understand process scheduling)
– How to share resources
– How to synchronize activity
– How to synchronize I/O specifically

Process Scheduling

Process
The activation of a program. There can be multiple processes of one program.
Entities associated with a process:
– instruction pointer
– the user who owns it
– memory locations of user data areas
– its own run-time stack

Process Scheduling
The order of process execution is unpredictable, and the duration of execution is unpredictable. Whether the concurrency is apparent or real is not relevant to the designer. There will generally be a need to resynchronize activity between cooperating processes.

Context switch
When the OS switches the running process, it manipulates internal process tables, and registers must be stored and loaded. Threads minimize that effort: same process, but a different stack. A context switch is necessary overhead, but that overhead must be balanced against the advantage of concurrency.

Operating system scheduling (state diagram): process 200 moves from Running to Blocked when it blocks reading from the disk; process 205 then moves from Ready to Running.

What other activities are important in scheduling?
– Jobs go from RUNNING to READY when they lose their time slice
– Jobs go from BLOCKED to READY when the I/O operation they are waiting for completes
– Jobs go from RUNNING to being removed completely upon exit

Concurrency at many levels
– Process level: the Unix fork command (review example); network client-server level
– Subprocess level (like a procedure): threads
– Statement level
– Scheduling level

How do you get limited parallelism from the OS? (timeline diagram: CPU vs. disk controller)
Process A runs on the CPU, then blocks on a read; the disk reads for A while A is blocked; process B runs, then blocks; the disk reads for B; process C runs for its time slice. There are times when both processors (CPU and disk controller) are busy, so real parallelism does occur, but not at the level of the CPU.

What if YOU have to create the concurrency?

High-level Concurrency

Unix and 95/98/NT
Unix process concurrency uses fork and exec:
– fork: clone self; parent and child differ by ppid; two processes
– exec: new process replaces the original; a single, different process
– parent and child do not always begin/continue at the same spot as in fork
95/98/NT threads are part of the same process:
– own copy of locals, shared copy of globals
– shared resources such as file descriptors
– each thread has its own activation record
– a thread ceases execution on return
– _beginthread() needs a procedure
– CreateProcess() is like fork/exec

Unix fork() at the process level

#include <iostream>
#include <unistd.h>
using namespace std;

int main() {
    fork();
    cout << "Hi";
    return 0;
}

produces

HiHi

The question is... who said "Hi" first?

(diagram) After fork(), Process 1 and Process 2 each continue executing the same code, so each prints "Hi". The output is Hi Hi, in either order.

Another Example

fork();
cout << "a";
fork();
cout << "b";

How many processes are generated? How many possible outputs can you see?

A more common Unix example

Use in servers (and clients) (not important for us)

while (1) {
    talksock = accept(listsock...
    cid = fork();
    if (cid > 0) {
        // this is parent code
        close(talksock);   // use listsock, repeat loop
    } else {
        // this is child code
        close(listsock);   // use talksock ...
        exit();
    }
}

Threads (windows style)

Non-threaded

void count(int i) {
    int j;
    for (j = 1; j <= i; j++)
        cout << j << endl;
}

int main() {
    count(4);
}

Threaded

void count(int i) {
    int j;
    for (j = 1; j <= i; j++)
        cout << j << endl;
}

int main() {
    _beginthread((void (*)(void*)) count, 0, (void*) 5);
    count(4);
}

The Synchronization Problem

Concurrency frequently requires synchronization!

Cooperation synchronization: A is working on something; B must wait for A to finish.
    A does this:  x = f + g;
    B does this:  h = x + y;

Competition synchronization: A needs to read a stream, and B needs to read the stream, but only one can read at a time.
    A does this:  instr >> a;
    B does this:  instr >> b;

We'll see how to do this later!

The synchronization problem (timeline)
Shared memory: T = 3.  Task A executes T = T + 1; Task B executes T = T * 2.

1. A fetches T (3)
2. A increments its copy (4)
3. A loses the CPU
4. B fetches T (3)
5. B doubles its copy (6)
6. B stores T; now T = 6
7. A gets the CPU back
8. A stores T; now T = 4 (B's update is lost)

TRY THIS: What other combinations could occur?

The essence of the problem
There are times during which exclusive access must be granted. These areas of our program are called critical sections. Sometimes this is handled by disabling interrupts so the process keeps the processor; most often it is handled through a more controlled mechanism like a semaphore.

Where do we see it? EVERYWHERE. Database access, any shared data structure, file/printer/memory resources: any intersection of need between processing entities for data.

Synchronization solutions for C/C++ (Java later)

Semaphore
One means of synchronizing activity. It is managed by the operating system:
– implementing it yourself will not work (you cannot guarantee mutual exclusion)
– applications call the typical wait and release functions
A count is associated with the semaphore:
– a 0/1 binary semaphore implies only a single resource to manage
– a larger value means multiple resources
A queue holds the waiting (blocked) processes.

wait and release

wait(semA) {
    if (semA > 0)
        decrement semA;
    else
        put the caller in semA's queue (block it);
}

release(semA) {
    if (semA's queue is empty)
        increment semA;
    else
        remove a job from semA's queue (unblock it);
}

This represents what the operating system does when an application asks for access to the resource by calling wait or release on the semaphore.

Standard example: producer-consumer

semaphore fullspots, emptyspots;
fullspots.count = 0;
emptyspots.count = BUFLEN;

task producer;
    loop
        wait(emptyspots);
        DEPOSIT(VALUE);        // the shared resource (buffer)
        release(fullspots);
    end loop;
end producer;

task consumer;
    loop
        wait(fullspots);
        FETCH(VALUE);
        release(emptyspots);
    end loop;
end consumer;

Why do you need TWO semaphores? Are adding and removing the same?

Competition Synchronization
What if multiple processes want to put objects in the buffer? We might have a similar synchronization problem. Use a BINARY semaphore for access and COUNTING semaphores for the slots.

semaphore access, fullspots, emptyspots;
access.count = 1;              // BINARY
fullspots.count = 0;
emptyspots.count = BUFLEN;

task producer;
    loop
        wait(emptyspots);
        wait(access);
        DEPOSIT(VALUE);
        release(access);
        release(fullspots);
    end loop;
end producer;

task consumer;
    loop
        wait(fullspots);
        wait(access);
        FETCH(VALUE);
        release(access);
        release(emptyspots);
    end loop;
end consumer;

Remind you of a printer queue problem?

Statement-level parallelism: Fortran

Statement-level parallel process

PARALLEL LOOP 20 K = 1, 20
PRIVATE (T)
   DO 20 I = 1, 500
      T = 0.0
      DO 30 J = 1,
30       T = T + ( B(J,K) * A(I,J) )
20    C(I,K) = T