PROCESS AND THREADS

Three ways for supporting concurrency with socket programming:
• I/O multiplexing
  • Select
  • Epoll: man epoll
  • Libevent
  • kqueue
• Process
• Thread

Inefficiencies in select()
• Since the bitmaps are overwritten by the kernel, user applications must refill the interest sets for every call.
• The user application and the kernel must scan the entire bitmaps on every call to figure out which file descriptors belong to the interest sets and the result sets. This is especially inefficient for the result sets, since they are likely to be very sparse (i.e., only a few file descriptors are ready at a given time).
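To make the contrast concrete, here is a minimal sketch of the two event loops (Linux assumed). fds[], nfds, and the commented-out handle_input() are illustrative placeholders rather than lecture code, and error checking is omitted.

#include <sys/select.h>
#include <sys/epoll.h>

/* select(): interest set rebuilt and rescanned on every call */
void select_loop(int *fds, int nfds)
{
    for (;;) {
        fd_set readfds;
        int i, maxfd = -1;

        FD_ZERO(&readfds);                    /* interest set rebuilt every iteration */
        for (i = 0; i < nfds; i++) {
            FD_SET(fds[i], &readfds);
            if (fds[i] > maxfd)
                maxfd = fds[i];
        }
        select(maxfd + 1, &readfds, NULL, NULL, NULL);

        for (i = 0; i < nfds; i++) {          /* scan the whole, likely sparse, result set */
            if (FD_ISSET(fds[i], &readfds)) {
                /* handle_input(fds[i]); */
            }
        }
    }
}

/* epoll: interest list registered once and kept by the kernel */
void epoll_loop(int *fds, int nfds)
{
    struct epoll_event ev, events[64];
    int epfd = epoll_create1(0);
    int i, n;

    for (i = 0; i < nfds; i++) {
        ev.events = EPOLLIN;
        ev.data.fd = fds[i];
        epoll_ctl(epfd, EPOLL_CTL_ADD, fds[i], &ev);
    }
    for (;;) {
        n = epoll_wait(epfd, events, 64, -1); /* only ready descriptors are returned */
        for (i = 0; i < n; i++) {
            /* handle_input(events[i].data.fd); */
        }
    }
}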

Approaches to Concurrency
• Processes
  • Hard to share resources: easy to avoid unintended sharing
  • High overhead in adding/removing clients
• Threads
  • Easy to share resources: perhaps too easy
  • Medium overhead
  • Not much control over scheduling policies
  • Difficult to debug: event orderings not repeatable
• I/O Multiplexing
  • Tedious and low level
  • Total control over scheduling
  • Very low overhead
  • Cannot create as fine-grained a level of concurrency
  • Does not make use of multiple cores

But first,
• What is a process?
• Some review of the functions we learned (e.g., fork, waitpid)
• What is a thread?

Processes  Definition: A process is an instance of a running program.  One of the most profound ideas in computer science  Not the same as “program” or “processor”  Process provides each program with two key abstractions:  Logical control flow Each program seems to have exclusive use of the CPU  Private virtual address space Each program seems to have exclusive use of main memory  How are these Illusions maintained?  Process executions interleaved (multitasking) or run on separate cores  Address spaces managed by virtual memory system

[Figure: process virtual address space, from low to high addresses: read-only segment (.init, .text, .rodata) and read/write segment (.data, .bss) loaded from the executable file; run-time heap (created by malloc), bounded by the brk pointer; memory-mapped region for shared libraries; user stack (created at runtime), addressed by %esp (the stack pointer); and, at the top, kernel virtual memory (code, data, heap, stack), which is invisible to user code.]

Concurrent Processes
• Two processes run concurrently (are concurrent) if their flows overlap in time
• Otherwise, they are sequential
• Examples (running on a single core):
  • Concurrent: A & B, A & C
  • Sequential: B & C
[Figure: execution timeline of processes A, B, and C]

User View of Concurrent Processes
• Control flows for concurrent processes are physically disjoint in time
• However, we can think of concurrent processes as running in parallel with each other
[Figure: the same timeline of processes A, B, and C, viewed as logically parallel flows]

A View of a Process
• Process = process context + code, data, and stack
  • Process context = program context (data registers, stack pointer SP, program counter PC) + kernel context (VM structures, descriptor table, brk pointer)
  • Code, data, and stack = read-only code/data, read/write data, run-time heap (bounded by brk), shared libraries, and the stack (addressed by SP)

Context Switching
• Processes are managed by a shared chunk of OS code called the kernel
  • Important: the kernel is not a separate process, but rather runs as part of some user process
• Control flow passes from one process to another via a context switch
[Figure: process A runs user code, traps into kernel code, and a context switch hands control to process B; later another context switch returns control to A]

fork: Creating New Processes
• int fork(void)
  • creates a new process (child process) that is identical to the calling process (parent process)
  • returns 0 to the child process
  • returns the child's pid to the parent process
• fork is interesting (and often confusing) because it is called once but returns twice

pid_t pid = fork();
if (pid == 0) {
    printf("hello from child\n");
} else {
    printf("hello from parent\n");
}

Understanding fork
• After the call, the same code continues in both processes:
  • Parent (process n): fork returns pid = m, so it takes the else branch and prints "hello from parent"
  • Child (process m): fork returns pid = 0, so it prints "hello from child"
• Which output appears first?

Fork Example #1

void fork1()
{
    int x = 1;
    pid_t pid = fork();
    if (pid == 0) {
        printf("Child has x = %d\n", ++x);
    } else {
        printf("Parent has x = %d\n", --x);
    }
    printf("Bye from process %d with x = %d\n", getpid(), x);
}

• Parent and child both run the same code
  • Distinguish parent from child by the return value from fork
• Start with the same state, but each has a private copy
  • Including the shared output file descriptor
• Relative ordering of their print statements is undefined

Fork Example #2

void fork2()
{
    printf("L0\n");
    fork();
    printf("L1\n");
    fork();
    printf("Bye\n");
}

• Both parent and child can continue forking
[Figure: process graph showing L0 printed once, L1 printed by two processes, and Bye printed by four]

Fork Example #4
• Both parent and child can continue forking

void fork4()
{
    printf("L0\n");
    if (fork() != 0) {
        printf("L1\n");
        if (fork() != 0) {
            printf("L2\n");
            fork();
        }
    }
    printf("Bye\n");
}

[Figure: process graph showing L0, L1, L2 printed along the parent's path, with Bye printed by every process]

exit: Ending a Process
• void exit(int status)
  • exits a process
  • normally return with status 0
• atexit() registers functions to be executed upon exit

void cleanup(void)
{
    printf("cleaning up\n");
}

void fork6()
{
    atexit(cleanup);
    fork();
    exit(0);
}

• The child inherits the registered atexit handlers, so "cleaning up" is printed twice: once by the parent and once by the child

File Descriptors and Descriptor Table
[Figure: each process has its own descriptor table (fd 0 = stdin, fd 1 = stdout, fd 2 = stderr, plus fd 3, fd 4, ...); descriptors point into the open file table (shared by all processes, one entry per open() with a file position and a refcnt), whose entries in turn point into the v-node table (shared by all processes, holding file access, file size, and file type for files A and B).]

After fork
[Figure: the child receives a copy of the parent's descriptor table; both tables point to the same open file table entries, so each entry's refcnt becomes 2 and the file positions are shared between parent and child.]

File sharing: e.g., opening the same file twice
[Figure: one descriptor table holds two descriptors for the same file; each open() creates its own open file table entry (refcnt = 1) with an independent file position, and both entries refer to the same v-node.]
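The effect of both pictures can be observed with a short program. This is a sketch rather than lecture code; it assumes a placeholder file foo.txt with at least two bytes and omits error checking.

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char c;

    /* Case 1: after fork(), parent and child share one open file table
       entry, and therefore one file position. */
    int fd = open("foo.txt", O_RDONLY);
    if (fork() == 0) {
        read(fd, &c, 1);               /* child consumes byte 0 ... */
        _exit(0);
    }
    wait(NULL);
    read(fd, &c, 1);                   /* ... so the parent reads byte 1 */
    printf("after fork: parent read byte 1: '%c'\n", c);

    /* Case 2: opening the same file twice creates two open file table
       entries with independent file positions. */
    int fd1 = open("foo.txt", O_RDONLY);
    int fd2 = open("foo.txt", O_RDONLY);
    read(fd1, &c, 1);                  /* byte 0 via fd1 */
    read(fd2, &c, 1);                  /* byte 0 again via fd2 */
    printf("two opens: the second descriptor also read byte 0: '%c'\n", c);
    return 0;
}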

Zombies
• Idea
  • When a process terminates, it still consumes system resources: various tables maintained by the OS
  • Called a "zombie": a living corpse, half alive and half dead
• Reaping
  • Performed by the parent on a terminated child
  • Parent is given exit status information
  • Kernel then discards the process
• What if the parent doesn't reap?
  • If any parent terminates without reaping a child, then the child will be reaped by the init process
  • So explicit reaping is only needed in long-running processes, e.g., shells and servers
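For that long-running case, a common pattern is to install a SIGCHLD handler that reaps with a non-blocking waitpid() loop. The sketch below is illustrative, not lecture code: serve_client() is a hypothetical stand-in for real per-client work, and error handling is omitted.

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

static void sigchld_handler(int sig)
{
    (void)sig;
    while (waitpid(-1, NULL, WNOHANG) > 0)   /* reap every zombie that is ready */
        ;
}

int main(void)
{
    signal(SIGCHLD, sigchld_handler);        /* sigaction() would be the more careful choice */
    for (;;) {
        if (fork() == 0) {
            /* serve_client(); */            /* hypothetical child work */
            _exit(0);
        }
        sleep(1);                            /* stand-in for waiting on the next client */
    }
}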

Zombie Example

void fork7()
{
    if (fork() == 0) {
        /* Child */
        printf("Terminating Child, PID = %d\n", getpid());
        exit(0);
    } else {
        printf("Running Parent, PID = %d\n", getpid());
        while (1)
            ; /* Infinite loop */
    }
}

linux> ./forks 7 &
[1] 6639
Running Parent, PID = 6639
Terminating Child, PID = 6640
linux> ps
  PID TTY          TIME CMD
 6585 ttyp9    00:00:00 tcsh
 6639 ttyp9    00:00:03 forks
 6640 ttyp9    00:00:00 forks
 6641 ttyp9    00:00:00 ps
linux> kill 6639
[1] Terminated
linux> ps
  PID TTY          TIME CMD
 6585 ttyp9    00:00:00 tcsh
 6642 ttyp9    00:00:00 ps

• ps shows the child process as "defunct"
• Killing the parent allows the child to be reaped by init

wait: Synchronizing with Children
• int wait(int *child_status)
  • suspends the current process until one of its children terminates
  • return value is the pid of the child process that terminated
  • if child_status != NULL, then the object it points to will be set to a status indicating why the child process terminated

wait: Synchronizing with Children

void fork9()
{
    int child_status;
    if (fork() == 0) {
        printf("HC: hello from child\n");
    } else {
        printf("HP: hello from parent\n");
        wait(&child_status);
        printf("CT: child has terminated\n");
    }
    printf("Bye\n");
    exit(0);
}

[Figure: process graph: the child prints HC then Bye; the parent prints HP, waits for the child to terminate, then prints CT and Bye]

wait() Example
• If multiple children have completed, wait takes them in arbitrary order
• Can use the macros WIFEXITED and WEXITSTATUS to get information about the exit status

void fork10()
{
    pid_t pid[N];
    int i;
    int child_status;

    for (i = 0; i < N; i++)
        if ((pid[i] = fork()) == 0)
            exit(100 + i); /* Child */
    for (i = 0; i < N; i++) {
        pid_t wpid = wait(&child_status);
        if (WIFEXITED(child_status))
            printf("Child %d terminated with exit status %d\n",
                   wpid, WEXITSTATUS(child_status));
        else
            printf("Child %d terminated abnormally\n", wpid);
    }
}

waitpid(): Waiting for a Specific Process
• waitpid(pid, &status, options)
  • suspends the current process until the specified process terminates

void fork11()
{
    pid_t pid[N];
    int i;
    int child_status;

    for (i = 0; i < N; i++)
        if ((pid[i] = fork()) == 0)
            exit(100 + i); /* Child */
    for (i = N - 1; i >= 0; i--) {
        pid_t wpid = waitpid(pid[i], &child_status, 0);
        if (WIFEXITED(child_status))
            printf("Child %d terminated with exit status %d\n",
                   wpid, WEXITSTATUS(child_status));
        else
            printf("Child %d terminated abnormally\n", wpid);
    }
}

Summary  Processes  At any given time, system has multiple active processes  Only one can execute at a time on a single core, though  Each process appears to have total control of processor + private memory space

Summary (cont.)
• Spawning processes
  • Call fork
  • One call, two returns
• Process completion
  • Call exit
  • One call, no return
• Reaping and waiting for processes
  • Call wait or waitpid

Approaches to Concurrency
• Processes
  • Hard to share resources: easy to avoid unintended sharing
  • High overhead in adding/removing clients
• Threads
  • Easy to share resources: perhaps too easy
  • Medium overhead
  • Not much control over scheduling policies
  • Difficult to debug: event orderings not repeatable
• I/O Multiplexing
  • Tedious and low level
  • Total control over scheduling
  • Very low overhead
  • Cannot create as fine-grained a level of concurrency
  • Does not make use of multiple cores

Approach #3: Multiple Threads
• Very similar to approach #1 (multiple processes)
  • but with threads instead of processes

Traditional View of a Process
• Process = process context + code, data, and stack
  • Program context: data registers, condition codes, stack pointer (SP), program counter (PC)
  • Kernel context: VM structures, descriptor table, brk pointer
  • Code, data, and stack: read-only code/data, read/write data, run-time heap (bounded by brk), shared libraries, and the stack (addressed by SP)

Alternate View of a Process
• Process = thread + code, data, and kernel context
  • Thread (main thread): thread context (data registers, condition codes, stack pointer SP, program counter PC) plus the stack
  • Code and data: read-only code/data, read/write data, run-time heap (bounded by brk), shared libraries
  • Kernel context: VM structures, descriptor table, brk pointer

A Process With Multiple Threads
• Multiple threads can be associated with a process
  • Each thread has its own logical control flow
  • Each thread shares the same code, data, and kernel context: they share a common virtual address space (including the stacks)
  • Each thread has its own thread id (TID)
[Figure: thread 1 (main thread) and thread 2 (peer thread), each with its own context (data registers, condition codes, SP1/PC1 and SP2/PC2) and its own stack, sharing the code, data, and kernel context (VM structures, descriptor table, brk pointer)]
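A tiny Pthreads program (a sketch, not lecture code) makes this sharing visible: both peer threads print the same address for a global variable but different addresses for a stack local.

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                       /* one copy, visible to every thread */

void *peer(void *vargp)
{
    int local = 0;                            /* lives on this thread's private stack */
    shared_counter++;                         /* unsynchronized: fine for this demo, but real code would use a mutex */
    local++;
    printf("peer: shared_counter=%d (at %p), local=%d (at %p)\n",
           shared_counter, (void *)&shared_counter, local, (void *)&local);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, peer, NULL);
    pthread_create(&t2, NULL, peer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("main: shared_counter=%d\n", shared_counter);   /* both increments are visible here */
    return 0;
}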

Logical View of Threads
• Threads associated with a process form a pool of peers
  • Unlike processes, which form a tree hierarchy
[Figure: a process hierarchy (P0, P1, sh, foo, bar) versus the pool of peer threads T1 through T5 associated with process foo, all sharing code, data, and kernel context]

Thread Execution
• Single-core processor
  • Simulate concurrency by time slicing
• Multi-core processor
  • Can have true concurrency
[Figure: threads A, B, and C time-sliced on one core versus 3 threads run on 2 cores]

Logical Concurrency
• Two threads are (logically) concurrent if their flows overlap in time
• Otherwise, they are sequential
• Examples:
  • Concurrent: A & B, A & C
  • Sequential: B & C
[Figure: execution timeline of threads A, B, and C]

Threads vs. Processes
• How threads and processes are similar
  • Each has its own logical control flow
  • Each can run concurrently with others (possibly on different cores)
  • Each is context switched
• How threads and processes are different
  • Threads share code and some data; processes (typically) do not
  • Threads are somewhat less expensive than processes
    • Process control (creating and reaping) is about twice as expensive as thread control
    • Linux numbers: ~20K cycles to create and reap a process, ~10K cycles (or less) to create and reap a thread
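A rough way to check that cost claim on your own machine is to time N create-and-reap cycles for processes and for threads. This is a sketch, not lecture code; it uses clock_gettime() rather than cycle counters, so absolute numbers will differ from the slide and vary with kernel and hardware, and only the relative gap is meaningful.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static void *noop(void *arg) { return arg; }

static double elapsed_ns(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    enum { N = 1000 };
    struct timespec t0, t1;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < N; i++) {                 /* create and reap a process */
        if (fork() == 0)
            _exit(0);
        wait(NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("fork + wait:           %.0f ns per iteration\n", elapsed_ns(t0, t1) / N);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < N; i++) {                 /* create and reap a thread */
        pthread_t tid;
        pthread_create(&tid, NULL, noop, NULL);
        pthread_join(tid, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("pthread_create + join: %.0f ns per iteration\n", elapsed_ns(t0, t1) / N);
    return 0;
}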

Posix Threads (Pthreads) Interface
• Pthreads: standard interface for ~60 functions that manipulate threads from C programs
• Creating and reaping threads
  • pthread_create()
  • pthread_join()
• Determining your thread ID
  • pthread_self()
• Terminating threads
  • pthread_cancel()
  • pthread_exit()
  • exit() [terminates all threads], RET [terminates current thread]
• Synchronizing access to shared variables
  • pthread_mutex_init
  • pthread_mutex_[un]lock
  • pthread_cond_init
  • pthread_cond_[timed]wait

The Pthreads "hello, world" Program

/*
 * hello.c - Pthreads "hello, world" program
 */
#include "csapp.h"
void *thread(void *vargp);

int main()
{
    pthread_t tid;
    Pthread_create(&tid, NULL, thread, NULL); /* 2nd arg: thread attributes (usually NULL); 4th arg: thread arguments (void *p) */
    Pthread_join(tid, NULL);                  /* 2nd arg: return value (void **p) */
    exit(0);
}

/* thread routine */
void *thread(void *vargp)
{
    printf("Hello, world!\n");
    return NULL;
}

Execution of Threaded "hello, world"
[Figure: the main thread calls Pthread_create() (which returns immediately), then calls Pthread_join() and waits for the peer thread to terminate; the peer thread runs printf(), returns NULL, and terminates; Pthread_join() then returns, and exit() terminates the main thread and any peer threads]

Thread-Based Concurrent Echo Server
• Spawn a new thread for each client
• Pass it a copy of the connection file descriptor
• Note the use of Malloc()! Without a corresponding Free()

int main(int argc, char **argv)
{
    int port = atoi(argv[1]);
    struct sockaddr_in clientaddr;
    int clientlen = sizeof(clientaddr);
    pthread_t tid;
    int listenfd = Open_listenfd(port);

    while (1) {
        int *connfdp = Malloc(sizeof(int));
        *connfdp = Accept(listenfd, (SA *) &clientaddr, &clientlen);
        Pthread_create(&tid, NULL, echo_thread, connfdp);
    }
}

Thread-Based Concurrent Server (cont)
• Run the thread in "detached" mode
  • Runs independently of other threads
  • Automatically reaped when it terminates
• Free the storage allocated to hold the connected descriptor
  • "Producer-consumer" model

/* thread routine */
void *echo_thread(void *vargp)
{
    int connfd = *((int *)vargp);
    Pthread_detach(pthread_self());
    Free(vargp);
    echo(connfd);
    Close(connfd);
    return NULL;
}

Threaded Execution Model
• Multiple threads within a single process
• Some state shared between them, e.g., file descriptors
[Figure: the listening server thread accepts connection requests and hands each client (client 1, client 2) its own server thread; each connection carries its own client data]

Potential Form of Unintended Sharing

while (1) {
    int connfd = Accept(listenfd, (SA *) &clientaddr, &clientlen);
    Pthread_create(&tid, NULL, echo_thread, (void *) &connfd);
}

[Figure: the main thread's stack holds the single connfd variable, and both peer 1's and peer 2's vargp point to it; the main thread stores connfd 1, but may accept and store connfd 2 before peer 1 dereferences *vargp. Race!]
• Why would both copies of vargp point to the same location?

Could this race occur?
• Race test
  • If there were no race, then each thread would get a different value of i
  • The set of saved values would then consist of one copy each of 0 through 99

/* Main thread */
int i;
for (i = 0; i < 100; i++) {
    Pthread_create(&tid, NULL, thread, &i);
}

/* Peer thread routine */
void *thread(void *vargp)
{
    int i = *((int *)vargp);
    Pthread_detach(pthread_self());
    save_value(i);
    return NULL;
}

Experimental Results
• The race can really happen!
[Figure: histograms of the saved values from runs on a single-core laptop and on a multicore server; one run shows one copy of each value (no race), the other shows duplicated and missing values (the race occurred)]

Issues With Thread-Based Servers
• Must run "detached" to avoid a memory leak
  • At any point in time, a thread is either joinable or detached
  • A joinable thread can be reaped and killed by other threads
    • must be reaped (with pthread_join) to free memory resources
  • A detached thread cannot be reaped or killed by other threads
    • resources are automatically reaped on termination
  • Default state is joinable
    • use pthread_detach(pthread_self()) to make it detached
• Must be careful to avoid unintended sharing
  • For example, passing a pointer to the main thread's stack: Pthread_create(&tid, NULL, thread, (void *)&connfd);
• All functions called by a thread must be thread-safe
  • Stay tuned
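One way to avoid that particular form of unintended sharing, besides the Malloc()/Free() handoff shown earlier, is to pass the descriptor by value inside the pointer argument so the peer thread never touches the main thread's stack. This is a sketch, not the lecture's code; plain Pthreads calls are used instead of the csapp wrappers, and echo() is assumed to be the lecture's echo routine.

#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

void echo(int connfd);                     /* provided elsewhere */

void *echo_thread_byval(void *vargp)
{
    int connfd = (int)(intptr_t)vargp;     /* unpack the descriptor from the pointer value */
    pthread_detach(pthread_self());        /* still detach to avoid a memory leak */
    echo(connfd);
    close(connfd);
    return NULL;
}

/* In the accept loop:
 *     pthread_create(&tid, NULL, echo_thread_byval, (void *)(intptr_t)connfd);
 * There is no shared storage for connfd, so there is no race and no
 * Malloc()/Free() handoff to get right. */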

Pros and Cons of Thread-Based Designs
• + Easy to share data structures between threads
  • e.g., logging information, file cache
• + Threads are more efficient than processes
• – Unintentional sharing can introduce subtle and hard-to-reproduce errors!
  • The ease with which data can be shared is both the greatest strength and the greatest weakness of threads
  • Hard to know which data is shared and which is private
  • Hard to detect by testing
    • Probability of a bad race outcome is very low, but nonzero!
  • Future lectures