CPS110: Intro to processes, threads and concurrency Author: Landon Cox.


Intro to processes
- Decompose activities into separate tasks
- Allow them to run in parallel
  - “Independently” (what does this mean?)
  - “without dependencies” …
- Key OS abstraction: processes
  - Run independently of each other
  - Don’t have to know about others

Intro to processes
- Remember, for any area of OS, ask
  - What interface does the hardware provide?
  - What interface/abstraction does the OS provide?
- What is physical reality?
  - Single computer (CPUs + memory)
  - Execute instructions from many programs
- What does an application see?
  - Each app “thinks” it has its own CPU + memory

Hardware, OS interfaces

[Diagram: applications (Job 1, Job 2, Job 3) each see their own virtual CPU and memory; the OS maps them onto the hardware's physical CPUs and memory.]

What is a process?
- Informal
  - A program in execution
  - Running code + things it can read/write
  - Process ≠ program
- Formal
  - ≥ 1 threads in their own address space
  - (soon threads will share an address space)

Parts of a process
- Thread
  - Sequence of executing instructions
  - Active: does things
- Address space
  - Data the process uses as it runs
  - Passive: acted upon by threads

Play analogy
- Process is like a play performance
- Program is like the play’s script
- What are the threads? What is the address space?

What is in the address space?
- Program code
  - Instructions, also called “text”
- Data segment
  - Global variables, static variables
  - Heap (where “new” memory comes from)
- Stack
  - Where local variables are stored

Review of the stack
- Each stack frame contains a function’s
  - Local variables
  - Parameters
  - Return address
  - Saved values of calling function’s registers
- The stack enables recursion

Example stack

void C () { A (0); }
void B () { C (); }
void A (int tmp) { if (tmp) B (); }
int main () { A (1); return 0; }

[Diagram: the stack grows from high addresses toward 0x0 as main calls A, A calls B, and B calls C; each frame holds the function's locals (e.g. tmp) and the return address (RA) back into the caller's code. SP points at the newest frame.]

The stack and recursion

void A (int bnd) { if (bnd) A (bnd-1); }
int main () { A (3); return 0; }

[Diagram: main's frame is followed by four frames for A, with bnd = 3, 2, 1, 0; each frame holds its own copy of bnd and a return address.]

- How can recursion go wrong?
  - Can overflow the stack …
  - Keep adding frame after frame

The stack and buffer overflows

void cap (char* b) {
  for (int i=0; b[i] != '\0'; i++)
    b[i] += 32;
}

int main (char* arg) {
  char wrd[4];
  strcpy(wrd, arg);  // copies arg into the 4-byte buffer wrd
  cap (wrd);
  return 0;
}

[Diagram: main's frame holds wrd[0..3] adjacent to the saved return address; cap's frame holds b and cap's RA.]

- What can go wrong?
  - Can overflow wrd variable …
  - Overwrite cap’s RA

What is missing?
- What process state isn’t in the address space?
  - Registers
  - Program counter (PC)
  - General purpose registers
- Review 104 for more details

Multiple threads in an addr space
- Several actors on a single set
  - Sometimes they interact (speak, dance)
  - Sometimes they are apart (different scenes)

Private vs global thread state
- What state is private to each thread?
  - PC (where actor is in his/her script)
  - Stack, SP (actor’s mindset)
- What state is shared?
  - Global variables, heap (props on set)
  - Code (like lines of a play)

Looking ahead: concurrency
- Concurrency
  - Having multiple threads active at one time
  - Thread is the unit of concurrency
- Primary topics
  - How threads cooperate on a single task
  - How multiple threads can share the CPU
  - Subject of Project 1

Looking ahead: address spaces
- Address space
  - Unit of “state partitioning”
- Primary topics
  - Many addr spaces sharing physical memory
  - Efficiency
  - Safety (protection)
  - Subject of Project 2

Course administration
- CS account requests are out
  - Admins should contact you within the next day or so
- Project 0 due 9/7, groups due Friday
  - Post questions to the Blackboard message board
  - Once I have your group + you have your CS account, you can submit
- Project 1: The Big One
  - Will be due in about a month
- Discussion section
  - Friday (2:50-4:05)
- Any other questions?

Thread independence
- Ideal decomposition of tasks:
  - Tasks are completely independent
  - Remember our earlier definition of independence
- Is such a pure abstraction really feasible?
  - Word saves a pdf, starts acroread, which reads the pdf?
  - Running mp3 player, while compiling 110 project?
- Sharing creates dependencies
  - Software resources (file, address space)
  - Hardware resources (CPU, monitor, keyboard)

True thread independence
- What would pure independence actually look like?
  - (system with no shared software, hardware resources)
  - Multiple computer systems
  - Each running non-interacting programs
  - Technically still share the power grid …
- “Pure” independence is infeasible
  - Tension between software dependencies, “features”
- Key question: is the thread abstraction still useful?
  - Easier to have one thread with multiple responsibilities?

Consider a web server
- One processor
- Multiple disks
- Tasks
  - Receives multiple, simultaneous requests
  - Reads web pages from disk
  - Returns on-disk files to requester

Web server (single thread)
- Option 1: could handle requests serially
- Easy to program, but painfully slow (why?)

[Timeline: R1 arrives and the server receives R1 and issues disk request 1a; R2 arrives but must wait until 1a completes and R1 finishes before the server can even receive R2.]

Web server (event-driven)
- Option 2: use asynchronous I/O
- Fast, but hard to program (why?)

[Timeline: the server receives R1 and starts disk request 1a, then immediately receives R2 while 1a is still in flight; when 1a completes, the server finishes R1.]

Web server (multi-threaded)
- Option 3: assign one thread per request
- Where is each request’s state stored?

[Timeline: thread WS1 receives R1 and blocks on disk request 1a; thread WS2 receives R2 as soon as it arrives; when 1a completes, WS1 finishes R1.]

Threads are useful
- The thread abstraction cannot provide total independence
  - But it is still a useful abstraction!
- Threads make concurrent programming easier
  - Thread system manages sharing the CPU (unlike in the event-driven case)
  - Apps can encapsulate task state within a thread (e.g. web request state)

Where are threads used?
- When a resource is slow, don’t want to wait on it
- Windowing system
  - One thread per window, waiting for window input
  - What is slow? Human input, mouse, keyboard
- Network file/web/DB server
  - One thread per incoming request
  - What is slow? Network, disk, remote user (e.g. ATM bank customer)

Where are threads used?
- When a resource is slow, don’t want to wait on it
- Operating system kernel
  - One thread waits for keyboard input
  - One thread waits for mouse input
  - One thread writes to the display
  - One thread writes to the printer
  - One thread receives data from the network card
  - One thread per disk …
- Just about everything except the CPU is slow

Cooperating threads
- Assume each thread has its own CPU
  - We will relax this assumption later
- CPUs run at unpredictable speeds
  - Source of non-determinism

[Diagram: threads A, B, and C each run on their own CPU; all three CPUs share one memory.]

Non-determinism and ordering

[Diagram: events from threads A, B, and C are merged over time into one global ordering.]

- Why do we care about the global ordering?
  - Might have dependencies between events
  - Different orderings can produce different results
- Why is this ordering unpredictable?
  - Can’t predict how fast processors will run

Non-determinism example 1
- Thread A: cout << “ABC”;
- Thread B: cout << “123”;
- Possible outputs?
  - “A1BC23”, “ABC123”, …
- Impossible outputs? Why?
  - “321CBA”, “B12C3A”, …
- What is shared between threads?
  - Screen, maybe the output buffer

Non-determinism example 2
- y = 10;
- Thread A: int x = y+1;
- Thread B: y = y*2;
- Possible results?
  - A goes first: x = 11 and y = 20
  - B goes first: y = 20 and x = 21
- What is shared between threads?
  - Variable y

Non-determinism example 3
- x = 0;
- Thread A: x = 1;
- Thread B: x = 2;
- Possible results?
  - B goes first: x = 1
  - A goes first: x = 2
- Is x = 3 possible?

Example 3, continued
- What if “x = v;” is implemented as
  - x := x & 0
  - x := x | v
- Consider this schedule
  - Thread A: x := x & 0
  - Thread B: x := x & 0
  - Thread B: x := x | 2
  - Thread A: x := x | 1
- Now x = 3, a value neither thread wrote!

Atomic operations
- Must know what operations are atomic before we can reason about cooperation
- Atomic
  - Indivisible
  - Happens without interruption
- Between start and end of atomic action
  - No events from other threads can occur

Review of examples
- Print example (ABC, 123)
  - What did we assume was atomic?
  - What if “print” is atomic?
  - What if printing a char was not atomic?
- Arithmetic example (x=y+1, y=y*2)
  - What did we assume was atomic?

Atomicity in practice
- On most machines
  - Memory assignment/reference is atomic
  - E.g.: a=1, a=b
- Many other instructions are not atomic
  - E.g.: double-precision floating point store
  - (often involves two memory operations)

Virtual/physical interfaces

[Diagram: the layered stack of hardware, OS, and applications.]

- If you don’t have atomic operations, you can’t make one.

Another example
- Two threads (A and B)
  - A tries to increment i
  - B tries to decrement i

Thread A:
i = 0;
while (i < 10) {
  i++;
}
print “A done.”

Thread B:
i = 0;
while (i > -10) {
  i--;
}
print “B done.”

Example continued (same code as above)
- Who wins?
- Does someone have to win?

Example continued (same code as above)
- Will it go on forever if both threads
  - Start at about the same time
  - And execute at exactly the same speed?
- Yes, if each C statement is atomic.

Example continued
- What if i++/i-- are not atomic?
  - tmp := i + 1
  - i := tmp
  - (tmp is private to A and B)

Example continued
- Non-atomic i++/i--
  - If A starts ½ statement ahead, B can win
  - How?

Thread A: tmpA := i + 1   // tmpA == 1
Thread B: tmpB := i - 1   // tmpB == -1
Thread A: i := tmpA       // i == 1
Thread B: i := tmpB       // i == -1

Example continued
- Non-atomic i++/i--
  - If A starts ½ statement ahead, B can win
- Do you need to worry about this?
  - Yes!!! No matter how unlikely

Debugging non-determinism
- Requires worst-case reasoning
  - Eliminate all ways for program to break
- Debugging is hard
  - Can’t test all possible interleavings
  - Bugs may only happen sometimes
- Heisenbug
  - Re-running program may make the bug disappear
  - Doesn’t mean it isn’t still there!

Constraining concurrency
- Synchronization
  - Controlling thread interleavings
- Some events are independent
  - No shared state
  - Relative order of these events doesn’t matter
- Other events are dependent
  - Output of one can be input to another
  - Their order can affect program results

Goals of synchronization
1. All interleavings must give correct result
   - Correct concurrent program
   - Works no matter how fast threads run
   - Important for your projects!
2. Constrain program as little as possible
   - Why?
   - Constraints slow program down
   - Constraints create complexity

Conclusion
- Next class: more cooperation
  - “How do actors interact on stage?”
- Start Project 0
  - Simple, designed to help you with C++