CSC 552.201 - Advanced Unix Programming, Fall, 2008 Monday, December 1 Thread local storage, POSIX:SEM semaphores and POSIX:XSI IPC.

CSC 552.201 - Advanced Unix Programming, Fall, 2008 Monday, December 1 Thread local storage, POSIX:SEM semaphores and POSIX:XSI IPC

Thread local data int pthread_key_create(pthread_key_t *key, void (*destructor)(void *)); This function creates a thread-specific data key visible to all threads in the process. Key values returned by pthread_key_create() are opaque objects used to locate thread-specific data. Although the same key value may be used by different threads, the values bound to the key by pthread_setspecific() are maintained on a per-thread basis and persist for the life of the calling thread. The destructor may be NULL; if it is not, it is called on the key's non-NULL value when the thread exits. The key acts like a hash index for locating thread-local data.

Thread local data access int pthread_setspecific(pthread_key_t key, const void *value); Binds value to key for the calling thread; value should point to an object that remains valid across calls. void *pthread_getspecific(pthread_key_t key); Returns the value bound to key in the calling thread, or NULL if no value has been set in this thread. int pthread_key_delete(pthread_key_t key); Deletes the key itself, for all threads; no destructors are invoked, so the application must release any per-thread values still bound to the key.
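A minimal sketch (not from the course materials) tying the two slides together: one key, a lazily allocated per-thread buffer, and a destructor that frees it at thread exit. The names buf_key, get_thread_buffer, and worker are illustrative.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t buf_key;

static void free_buffer(void *p) {   /* destructor runs at thread exit */
    free(p);
}

static char *get_thread_buffer(void) {
    char *buf = pthread_getspecific(buf_key);
    if (buf == NULL) {               /* first use in this thread */
        buf = malloc(256);
        pthread_setspecific(buf_key, buf);
    }
    return buf;
}

static void *worker(void *arg) {
    snprintf(get_thread_buffer(), 256, "thread %ld", (long)arg);
    printf("%s\n", get_thread_buffer());  /* each thread sees its own buffer */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_key_create(&buf_key, free_buffer);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_key_delete(buf_key);     /* key no longer needed by any thread */
    return 0;
}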

Thread local storage example A multithreaded database access API maintains per-thread caches for thread-local pre-commit and pre-abort database fetch and store operations. Each database read() fetches the DBID -> value mappings for its thread. Other threads might read() or write() the same DBID prior to commit() or abort(). Each thread must therefore maintain a cache of the DBID -> value mappings it currently sees, along with the sets of DBIDs that it has created, modified, deleted, or re-created. A thread's commit() writes all changes back to the DB and empties the thread-local cache. An abort() or close() simply empties the thread-local cache.

Thread local storage example [Diagram: each of N client threads calls a DB wrapper function(); the wrapper enqueues a Transaction record (function, resultQueue, this, DBID) onto the serviceQueue of a singleton database thread, which replies through the Transaction's resultQueue. Each client thread keeps a thread-local cache holding its current transaction mappings, deltas, and resultQueue.]

Why use thread-local storage in this DB example? This application uses thread local storage in part because it is middleware, in this case a wrapper that resides between client DB access functions and the underlying DB API calls. It intercepts client calls to the DB access functions because the underlying DB is not robust for multithreaded commit() and abort() calls. This middleware dedicates one worker thread to actually performing underlying DB commits and aborts. DB calls in client threads do not have access to data parameters in the threads’ startup functions.
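A hypothetical sketch of how such middleware might keep its per-thread transaction cache; the names txn_cache, db_init, current_cache, and db_read are illustrative, not the actual API of the course example.

#include <pthread.h>
#include <stdlib.h>

typedef struct txn_cache {
    /* DBID -> value mappings seen by this thread, plus created/modified/
     * deleted sets; represented abstractly for the sketch. */
    void *mappings;
    void *deltas;
} txn_cache;

static pthread_key_t cache_key;

void db_init(void) {                      /* call once, before client threads run */
    pthread_key_create(&cache_key, free); /* free() reclaims each thread's cache */
}

static txn_cache *current_cache(void) {
    txn_cache *c = pthread_getspecific(cache_key);
    if (c == NULL) {                      /* first DB call in this thread */
        c = calloc(1, sizeof(*c));
        pthread_setspecific(cache_key, c);
    }
    return c;
}

/* Each wrapper call consults only its own thread's cache before touching
 * the underlying (non-thread-safe) DB API. */
void *db_read(long dbid) {
    txn_cache *c = current_cache();
    /* ... look up dbid in c->mappings, fall through to the real DB ... */
    (void)c; (void)dbid;
    return NULL;
}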

POSIX:SEM semaphores (Ch. 14) int sem_init(sem_t *sem, int pshared, unsigned int value); and int sem_destroy(sem_t *sem); pshared is 0 for use by threads of a single process, nonzero for sharing across processes; value is the initial count, 0 for a locked semaphore, > 0 for unlocked. int sem_wait(sem_t *sem); blocks while the sem_t value is 0, and decrements it when it is > 0; int sem_trywait(sem_t *sem); is the non-blocking variant. int sem_post(sem_t *sem); adds 1 to the sem_t value if no sem_wait() callers are blocked, or unblocks one waiting thread if one or more are blocked.

~parson/UnixSysProg/semwait
50  sem_t sem_hasspace ;    // the buffer has space
51  sem_t sem_hascontent ;  // the buffer has content
55  if ((errcode = sem_init(&sem_hasspace, 0, 1)) == -1) {
59  if ((errcode = sem_init(&sem_hascontent, 0, 0)) == -1) {
65  (void) sem_destroy(&sem_hasspace);
Producer:
77  if ((errcode = sem_wait(&(link->sem_hasspace))) == -1) {
89  if ((errcode = sem_post(&(link->sem_hascontent))) == -1) {
Consumer:
107 if ((errcode = sem_wait(&(link->sem_hascontent))) == -1) {
130 if ((errcode = sem_post(&(link->sem_hasspace))) == -1) {
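The same handshake can be written as a self-contained program. Below is a minimal sketch, not the actual semwait source: it assumes a single-slot buffer, uses illustrative names (slot, producer, consumer), and omits error checking.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sem_hasspace;   /* 1 when the slot is empty */
static sem_t sem_hascontent; /* 1 when the slot is full  */
static int slot;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {
        sem_wait(&sem_hasspace);     /* block until the slot is empty */
        slot = i;
        sem_post(&sem_hascontent);   /* wake the consumer */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {
        sem_wait(&sem_hascontent);   /* block until the slot is full */
        printf("got %d\n", slot);
        sem_post(&sem_hasspace);     /* make room for the producer */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&sem_hasspace, 0, 1);   /* pshared=0: threads of one process */
    sem_init(&sem_hascontent, 0, 0); /* starts locked: nothing to consume */
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&sem_hascontent);
    sem_destroy(&sem_hasspace);
    return 0;
}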

Named semaphores can be opened and used across processes. sem_t *sem_open(const char *name, int oflag, /* mode_t mode, unsigned int value */...); name is a path-like name (not necessarily a file path) with a single leading "/" identifying the semaphore. oflag can include O_CREAT, optionally with O_EXCL. When O_CREAT is used, the mode_t argument sets permissions such as S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH, and the value argument sets the initial count. Close a named semaphore with sem_close() and remove its name with sem_unlink(); sem_destroy() applies only to unnamed semaphores initialized with sem_init().
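A short sketch of two unrelated processes coordinating through a named semaphore; the name "/demo_sem" is illustrative and error handling is minimal.

#include <fcntl.h>      /* O_CREAT, O_EXCL */
#include <sys/stat.h>   /* mode constants */
#include <semaphore.h>
#include <stdio.h>

int main(void) {
    /* Create the semaphore if absent, with initial value 1 (unlocked). */
    sem_t *sem = sem_open("/demo_sem", O_CREAT, S_IRUSR | S_IWUSR, 1);
    if (sem == SEM_FAILED) {
        perror("sem_open");
        return 1;
    }
    sem_wait(sem);                /* enter the critical section */
    /* ... touch the shared resource ... */
    sem_post(sem);                /* leave the critical section */
    sem_close(sem);               /* this process is done with it */
    sem_unlink("/demo_sem");      /* remove the name once all users are done */
    return 0;
}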

POSIX:XSI Inter-Process Communication (IPC) (Ch. 15) Semaphore sets, shared memory, and type-tagged message queues (sem, shm, msg). These mechanisms support synchronization and communication among UNIX processes. Unlike pipes and named FIFOs, they do not use file descriptors, so they cannot be monitored with select() or poll(). These mechanisms predate UNIX threads, commercially available sockets, and POSIX.

~parson/UnixSysProg/ipcwait Example code uses shared memory and a two-semaphore set to connect one producer process to two consumer processes.
– Semaphore sets: multiple semaphores in one set, updated atomically as a group; ftok(), semget() (to open), semctl(), semop()
– Shared memory: selected virtual pages of the participating processes map to the same shared physical pages; ftok(), shmget() (to open), shmctl(), shmat(), shmdt()
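A hedged sketch of the calls in the ipcwait pattern, under simplifying assumptions: one process creates everything and plays the producer, the key path /tmp and permissions are illustrative, and error checking is omitted.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/shm.h>
#include <string.h>

union semun { int val; struct semid_ds *buf; unsigned short *array; };

int main(void) {
    key_t key = ftok("/tmp", 'W');                 /* same key in every process */
    int semid = semget(key, 2, IPC_CREAT | 0600);  /* a set of 2 semaphores */
    int shmid = shmget(key, 4096, IPC_CREAT | 0600);
    char *mem = shmat(shmid, NULL, 0);             /* map segment into this process */

    union semun arg;
    arg.val = 1;
    semctl(semid, 0, SETVAL, arg);  /* sem 0: "has space", starts unlocked */
    arg.val = 0;
    semctl(semid, 1, SETVAL, arg);  /* sem 1: "has content", starts locked */

    /* Producer side: P(sem 0), write into shared memory, V(sem 1). */
    struct sembuf p0 = { 0, -1, 0 }, v1 = { 1, +1, 0 };
    semop(semid, &p0, 1);
    strcpy(mem, "hello");
    semop(semid, &v1, 1);

    shmdt(mem);   /* detach; the segment persists until explicitly removed */
    /* cleanup, normally done by only one process:
       semctl(semid, 0, IPC_RMID); shmctl(shmid, IPC_RMID, NULL); */
    return 0;
}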

ipcs and ipcrm shell utilities
-bash-3.00$ ipcs -p
Shared Memory:
m ... 0x1008ca1 --rw... parson faculty
Semaphores:
s ... 0x...8ca1 --ra... parson faculty
-bash-3.00$ pmap ...   (of ./ipcwait)
... K r-x-- /export/home/faculty/parson/UnixSysProg/ipcwait/ipcwait
... K rwx-- /export/home/faculty/parson/UnixSysProg/ipcwait/ipcwait
...
FF... K rwxs- [ shmid=0x5b ]
ipcrm -m <shmid> or ipcrm -s <semid> removes a leftover shared memory segment or semaphore set by its id.

~parson/UnixSysProg/msgipcwait Example code uses a message queue to connect one producer process to two consumer processes.
– Message queues: each message carries a "long" type tag field that receivers can use to select from among typed messages on the queue; ftok(), msgget() (to open), msgctl(), msgsnd(), msgrcv()
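A hedged, single-process sketch of the calls involved; in the course example the sender and receivers are separate processes, and the message layout (struct move_msg) and key path are illustrative.

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdio.h>

struct move_msg {
    long mtype;       /* required first field; receivers select on it */
    char text[64];
};

int main(void) {
    key_t key = ftok("/tmp", 'Q');
    int msqid = msgget(key, IPC_CREAT | 0600);

    struct move_msg out = { 2, "e2e4" };          /* tag this message as type 2 */
    msgsnd(msqid, &out, sizeof(out.text), 0);

    struct move_msg in;
    /* A consumer interested only in type-2 messages asks for mtype == 2. */
    msgrcv(msqid, &in, sizeof(in.text), 2, 0);
    printf("received type %ld: %s\n", in.mtype, in.text);

    msgctl(msqid, IPC_RMID, NULL);                /* remove the queue */
    return 0;
}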

Single-threaded, multiprocess, and multithreaded server loops
– Use select() or poll() to monitor all incoming (and possibly outgoing) fds of interest. In a single-threaded, single-process system, perform time-bounded work, then return to the service loop. Especially appropriate for real-time or small-footprint reactive systems. Forking or threading could still apply to some requests.
– Use accept() or another blocking system call in a server thread to receive service requests. fork() a worker process to perform concurrent work, or pthread_create() worker threads and maintain a thread pool.
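A minimal sketch of the second style (a blocking accept() plus one detached worker thread per connection); service() is a placeholder, not code from the course examples, and a real server would bound the number of threads or use a pool.

#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static void *service(void *arg) {
    int fd = (int)(long)arg;          /* connection fd passed through the void* */
    /* ... read the request, write the reply ... */
    close(fd);
    return NULL;
}

static void server_loop(int listenfd) {
    for (;;) {
        int fd = accept(listenfd, NULL, NULL);   /* block for the next client */
        if (fd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, NULL, service, (void *)(long)fd);
        pthread_detach(tid);                     /* no join; thread cleans up itself */
    }
}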

Programming assignment 4 (modify a copy of assignment 2 or 3) Each chess plugin starts one or more threads to monitor its incoming data streams, blocking on read() and copying each data stream to a log file and, for gnuchess, to an output stream. Threads reading stdout from gnuchess or pchess must detect moves as in assignment 2; they invoke a callback function that passes each move back to the main thread via a condition variable and a queue of moves. The main thread blocks on this condition variable; there is no select() or poll() loop. Move injection from the main thread into the stdin of a child game (gnuchess or pchess) must use a mutex to protect writes to the child's stdin stream, but only if there are multiple writers (e.g., xboard to gnuchess). Callbacks signal end-of-file and errors on the child data connections. The main thread must handle signals as in assignment 2; the child threads must mask out all signals.
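A hedged sketch of the callback-to-main-thread handoff described above: a mutex-protected queue of moves with a condition variable. The names move_queue, post_move, and wait_for_move are illustrative, not part of the assignment's required API, and the fixed-size ring buffer does not handle overflow.

#include <pthread.h>
#include <string.h>

#define MAXMOVES 128

static struct {
    char moves[MAXMOVES][16];
    int head, tail;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
} move_queue = {
    .lock = PTHREAD_MUTEX_INITIALIZER,
    .nonempty = PTHREAD_COND_INITIALIZER
};

/* Called from a reader thread's callback when it detects a move. */
void post_move(const char *move) {
    pthread_mutex_lock(&move_queue.lock);
    strncpy(move_queue.moves[move_queue.tail % MAXMOVES], move, 15);
    move_queue.tail++;
    pthread_cond_signal(&move_queue.nonempty);   /* wake the main thread */
    pthread_mutex_unlock(&move_queue.lock);
}

/* Called by the main thread instead of a select()/poll() loop. */
void wait_for_move(char *out) {
    pthread_mutex_lock(&move_queue.lock);
    while (move_queue.head == move_queue.tail)   /* guards against spurious wakeups */
        pthread_cond_wait(&move_queue.nonempty, &move_queue.lock);
    strncpy(out, move_queue.moves[move_queue.head % MAXMOVES], 16);
    move_queue.head++;
    pthread_mutex_unlock(&move_queue.lock);
}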