CompSci 143A
9. Linking and Sharing
9.1 Single-Copy Sharing – Why Share – Requirements for Sharing – Linking and Sharing
9.2 Sharing in Systems without Virtual Memory
9.3 Sharing in Paging Systems – Sharing of Data – Sharing of Code
9.4 Sharing in Segmented Systems
9.5 Principles of Distributed Shared Memory (DSM) – The User's View of DSM
9.6 Implementations of DSM – Implementing Unstructured DSM – Implementing Structured DSM
Spring, 2013

Single-Copy Sharing
Focus: sharing a single copy of code or data in memory.
Why share?
– Processes need to access common data (producer/consumer, task pools, file directories)
– Better utilization of memory (code, system tables, databases)

Requirements for Sharing
– How to express what is shared:
A priori agreement (e.g., system components)
Language construct (e.g., UNIX's shmget/shmat)
– Shared code must be reentrant (also known as read-only or pure):
Does not modify itself (read-only segments)
Data (stack, heap) kept in separate private areas for each process

Linking and Sharing
Linking resolves external references.
Sharing links the same copy of a module into two or more address spaces.
Static linking/sharing: references are resolved before execution starts.
Dynamic linking/sharing: references are resolved during execution.
Figure 9-1

Sharing without Virtual Memory
With one or no Relocation Register (RR):
– All memory of a process is contiguous.
– Sharing user programs: possible only between 2 user programs by partial overlap; too restrictive and difficult, so generally not used.
– Sharing system components: components are assigned specific, agreed-upon starting positions, and the linker resolves references to those locations. Can also use a block of transfer addresses, but this involves additional memory references.

Sharing without Virtual Memory
With multiple RRs:
– CBR (Code Base Register) points to the shared copy of the code.
– SBR (Stack Base Register) points to a private copy of the stack.
– DBR (Data Base Register) points to a private copy of the data.
Figure 9-2

Sharing in Paging Systems
Data pages and code pages.
Generally, we want to avoid requiring a shared page to have the same page number in all processes that share it:
– Code and data may be shared by many processes.
– A fixed page number could easily lead to conflicts.

Sharing in Paging Systems
Sharing of data pages:
– Page table entries of different processes point to the same page frame.
– If shared pages contain only data and no addresses, the linker can assign arbitrary page numbers to the shared pages and adjust the page tables to point to the appropriate page frames.
– Thus the shared page can have a different page number in different processes.
Figure 9-3

Sharing in Paging Systems
Sharing of code pages. Key issues:
– Self-references: references to the shared code from within the shared code
– Linking the shared code into multiple address spaces

Sharing of Code Pages in Paging Systems
Self-references:
– Avoid page numbers in shared code by compiling branch addresses relative to the CBR.
– This works provided the shared code is self-contained (does not contain any external references).
Linking shared pages into multiple address spaces:
– Issues: we want to defer loading of code until it is actually used, and when a process first accesses the code, it may already have been loaded by another process.
– Handled through dynamic linking using a transfer vector.
Figure 9-3

Dynamic Linking via Transfer Vector
Each transfer vector entry corresponds to a reference to shared code.
Each entry initially contains a piece of code called a stub. The stub does the following:
– Checks whether the referenced shared code is loaded
– If the shared code is not already loaded, loads it
– Replaces itself with a direct reference to the shared code
Figure 9-4

Sharing in Segmented Systems
Much the same as with paging, but simpler and more elegant because segments represent logical program entities.
ST entries of different processes point to the same segment in physical memory (PM).
Data segments, containing only data and no addresses: same as with paged systems.
Code segments:
– Assign the same segment numbers in all STs, or
– Use base registers: a function call loads the CBR, and self-references have the form w(CBR). This works if shared segments are self-contained (i.e., they do not contain any references to other segments).
Full generality can be achieved using private linkage sections, introduced in Multics (1968).

Unrestricted Dynamic Linking/Sharing
Basic principles (see Figure 9-5 on the next page):
– Self-references are resolved using the CBR.
– External references are indirect via a private linkage section.
– An external reference is (S,W), where S and W are symbolic names.
– At runtime, on first use: the symbolic address (S,W) is resolved to (s,w) using a trap mechanism; (s,w) is entered in the linkage section of the process; the code itself is unchanged.
– Subsequent references use (s,w) without involving the OS.
– This forces an additional memory access for every external reference.

Dynamic Linking/Sharing
Before and after an external reference is executed:
Figure 9-5a: Before. Figure 9-5b: After.

Distributed Shared Memory
Goal: create the illusion of a single shared memory in a distributed system.
The (ugly) reality is that physical memory is distributed: references to remote memory trigger hidden transfers from remote memory to local memory.
– Impractical/impossible to do this one reference at a time.
How to implement transfers efficiently?
– Optimize the implementation (most important with unstructured DSM).
– Restrict the user and exploit what the user knows (basic to structured DSM).

Unstructured DSM
Simulate a single, fully shared, unstructured memory. (Unlike paging, a CPU has no private space.) Figure 9-6
– Advantage: fully transparent to the user.
– Disadvantage: efficiency; every instruction fetch or operand read/write could be to remote memory.

Structured DSM
Each CPU has both private and shared space.
Add restrictions on the use of shared variables:
– Access only within (explicitly declared) critical sections.
– Modifications only need to be propagated at the beginning/end of critical sections.
Variant: use objects instead of shared variables (object-based DSM).
Figure 9-7

Implementing Unstructured DSM
Key issues:
– Granularity of transfers
– Replication of data
– Memory consistency: strict vs. sequential
– Tracking data: where is it stored now?

Implementing Unstructured DSM
Granularity of transfers:
– Transfer too little: time wasted in latency.
– Transfer too much: time wasted in transfer.
False sharing:
– Two unrelated variables, each accessed by a different process, lie on the same page (or set of pages) being transferred between physical memories.
– Can result in pages being transferred back and forth, similar to thrashing.

Implementing Unstructured DSM
Replication of data: move or copy?
– Copying saves time on later references.
– Copying causes (cache or real) consistency problems: reads work fine, but writes require the other copies to be updated or invalidated.
Figure 9-9

Implementing Unstructured DSM
Strict consistency: reading a variable x returns the value written to x by the most recently executed write operation.
Sequential consistency: the sequence of values of x read by different processes corresponds to some sequential interleaved execution of those processes.
Figure 9-11:
– Reads of x in p1 will always produce (1,2).
– Reads of x in p2 can produce (0,0), (0,1), (0,2), (1,1), (1,2), or (2,2).

Implementing Unstructured DSM
Tracking data: where is it stored now? Approaches:
– Have the owner track it by maintaining a copy set (list). Only the owner is allowed to write.
– Ownership can change when a write request is received. Now we need to find the owner:
Use broadcast.
Central manager (a potential bottleneck).
Replicated managers share responsibilities.
Probable owner gets tracked down: retrace the data's migration, and update the links traversed to show the current owner.
Bottom line on unstructured DSM: isn't there a better way?

Implementing Structured DSM
Memory consistency:
– Unstructured DSM assumes that all shared variables are consistent at all times. This is a major reason why its performance is so poor.
– Structured DSM introduces new, weaker models of memory consistency: weak consistency, release consistency, and entry consistency.

Implementing Structured DSM
Weak memory consistency:
– Introduce a synchronization variable S.
– Processes access S when they are ready to adjust/reconcile their shared variables.
– The DSM is only guaranteed to be in a consistent state immediately following an access to a synchronization variable.
Figure 9-12

Implementing Structured DSM
Release memory consistency:
– Export modified variables upon leaving the critical section (Figure 9-13).
– This is a waste if p2 never looks at x.

Implementing Structured DSM
Entry memory consistency:
– Associate each shared variable with a lock variable.
– Before entering the critical section, import only those variables associated with the current lock (Figure 9-14).
– There is also a (confusingly named?) lazy release consistency model, which imports all shared variables before entering the critical section.

Object-Based DSM
An object's functions/methods are part of it.
Can use remote method invocation (like remote procedure calls, covered earlier) instead of copying or moving an object into local memory.
Can also move or copy an object to improve performance.
When objects are replicated, consistency issues again arise (as in unstructured DSM). On a write, we could:
– Invalidate all other copies (as in unstructured DSM), or
– Remotely invoke, on all copies, a method that performs the same write.

Memory Models on Multiprocessors
Processors share memory, and each processor may have its own cache.
Memory models provide rules for deciding:
– When processor X sees writes to memory by other processors
– When writes by processor X are visible to other processors
These questions are similar to some of the issues that arise in distributed shared memory.

Java Memory Model
Similar issues arise in multithreaded Java code: each thread may have its own copy of shared variables, and may read from and write to that copy.
The Java Memory Model specifies:
– When thread X must see writes to memory by other threads
– When writes by thread X must become visible to other threads
The issues in Java differ from those in other languages such as C/C++:
– Threads are an integral part of the Java language.
– Java compilers can rearrange thread code as part of optimization.
– To achieve correctness, certain conditions must be guaranteed.

Java Memory Model (continued)
Full details are in JSR 133 (2004).
A happens-before relation is defined on memory references, locks, unlocks, and other thread operations. If one action happens-before another according to this definition, then the Java Virtual Machine guarantees that the results of the first action are visible to the second.
Example:
– If x=1 happens-before y=x and no other assignment to x intervenes, then y must be set to 1.
– But if it is not true that x=1 happens-before y=x, then y will not necessarily be set to 1.
Note that this is a separate issue from mutual exclusion, although the two are related.

Java Memory Model (continued)
Some rules defining the happens-before relation (not a complete list):
– An action in a thread happens-before a later action in that thread's sequential order.
– An unlock on an object happens-before every subsequent lock on that same object.
– A write to a volatile field happens-before every subsequent read of that same volatile field.
– A call to start() on a thread happens-before any actions within the thread.
– All actions within a thread happen-before any other thread returns from a join() on that thread.
– A write by a thread to a blocking queue happens-before any subsequent read from that blocking queue.
There are other rules. The compiler is free to reorder operations as long as the happens-before relation is respected.

History
Originally developed by Steve Franklin
Modified by Michael Dillencourt, Summer 2007
Modified by Michael Dillencourt, Spring 2009
Modified by Michael Dillencourt, Winter 2011 (added material on Java memory model)