

Presentation transcript: "CSE 466 – Fall 2000 - Introduction: Implementation of Shared Memory"

1 CSE 466 – Fall 2000 - Introduction - 1 Implementation of Shared Memory

Considerations:
- Network traffic due to create/read/write
- Latency of create/read/write
- Synchronization
- Network traffic vs. latency

Copy on Read (no local caching):
- Local read is fast, no traffic
- Remote read is slow, generates traffic (2-way)
- Local write is fast, no traffic
- Remote write is fast, generates traffic

Copy on Write (local caching):
- Local read is fast, no traffic
- Remote read is fast, no traffic
- Local write is fast, generates traffic
- Remote write is fast, generates traffic

Central Server and Single Copy are basically the same, except that a central server does not let you optimize the location of the master copy for the process that needs it the most.
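The traffic entries above can be summarized as a tiny cost model. This is a sketch under assumptions (the function and constant names are hypothetical, not from the course): cost is counted in messages per access, and a copy-on-write post goes to every subscriber.

```c
#include <assert.h>

typedef enum { COPY_ON_READ, COPY_ON_WRITE } policy_t;
typedef enum { OP_READ, OP_WRITE } op_t;

/* Messages generated by one access to a shared variable.
 * "local"  = the accessing task holds the master copy.
 * "n_subs" = number of subscribers (used for copy-on-write only). */
int msgs(policy_t p, op_t op, int local, int n_subs)
{
    if (p == COPY_ON_READ) {
        if (op == OP_READ)
            return local ? 0 : 2;   /* remote read: request + reply (2-way) */
        return local ? 0 : 1;       /* remote write: one message to the master */
    }
    /* Copy-on-write: every task caches the variable, so reads are always local. */
    if (op == OP_READ)
        return 0;
    return n_subs;                  /* every write is posted to each subscriber */
}
```

Plugging in the slide's cases reproduces its bullets: a remote read costs 2 messages under copy-on-read and 0 under copy-on-write, while writes are free only under copy-on-read with a local master.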

2 CSE 466 – Fall 2000 - Introduction - 2 Choice of Policy

Case 1: infrequent writes, frequent reads.
  One task, infrequently:  write(X,1); write(X,2); …
  Other tasks, frequently: read(X) …
  Copy-on-write (local caching) generates far less traffic.

Case 2: frequent writes, infrequent reads.
  One task, frequently:      write(X,1); write(X,2); …
  Other tasks, infrequently: read(X) …
  Copy-on-read (no local caching) generates less traffic.

3 CSE 466 – Fall 2000 - Introduction - 3 Choice of Place to Store

Case 1: infrequent writes, frequent reads.
  One task, infrequently:  write(X,1); write(X,2); …
  Other tasks, frequently: read(X) …
  Copy-on-write (local caching) generates far less traffic.
  Put the master copy on the most frequent writer.

Case 2: frequent writes, infrequent reads.
  One task, frequently:      write(X,1); write(X,2); …
  Other tasks, infrequently: read(X) …
  Copy-on-read (no local caching) generates less traffic.
  Put the master copy on the most frequent writer.

What to do when reading and writing are both frequent?
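The placement rule can be made concrete by totaling messages over a run. A sketch under assumptions (hypothetical helper names; one remote peer, one shared variable): under copy-on-read a remote write costs 1 message and a remote read costs a round trip, while under copy-on-write reads are free and each write is posted to every subscriber.

```c
#include <assert.h>

/* Copy-on-read: total messages for a task doing n_writes and n_reads.
 * If it holds the master copy, every access is local and free. */
long cor_total(long n_writes, long n_reads, int holds_master)
{
    if (holds_master)
        return 0;
    return n_writes + 2 * n_reads;   /* 1 msg per write, request+reply per read */
}

/* Copy-on-write: reads are free; each write is posted to n_subs subscribers. */
long cow_total(long n_writes, long n_subs)
{
    return n_writes * n_subs;
}
```

Putting the master on the most frequent writer drives its copy-on-read cost to zero; when reads and writes are both frequent, neither policy nor placement clearly dominates, which is the open question the slide ends on.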

4 CSE 466 – Fall 2000 - Introduction - 4 Implementation

Message types:
- Publish: broadcast; tells everyone the location of the master copy
- Subscribe: to publisher; tells the publisher the location of a subscriber
- Post: to publisher, on a subscriber write
- Post: to all subscribers, in response to receiving a post (CW)
- Update: to publisher, sent by a subscriber to request a post (CR). The subscriber blocks the application while waiting for the post.

Messages are not functions:
- It is not the job of the transport layer to guarantee delivery; that is assumed. It is the job of the data link layer.
- What about the order of delivery of these messages? Is it important? Whose responsibility is it?

Questions:
- Is it good to have subscribers known, or should we just broadcast all posts (CW)? What is the trade-off?
- How would you implement a test-and-set operation (CW, CR)?
- Could there be a race to publish?
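One way to picture the message types above is as a tagged struct. The field names and widths here are assumptions; the slide names only the message kinds, not a wire format.

```c
#include <assert.h>
#include <stdint.h>

typedef enum {
    MSG_PUBLISH,    /* broadcast: announces the location of the master copy */
    MSG_SUBSCRIBE,  /* to publisher: announces a subscriber's location */
    MSG_POST,       /* carries a new value: subscriber->publisher on a write,
                       or publisher->subscribers in response to a post (CW) */
    MSG_UPDATE      /* to publisher: requests a post (CR); the subscriber
                       blocks the application until the post arrives */
} msg_type_t;

typedef struct {
    msg_type_t type;
    uint16_t   addr;    /* which shared variable */
    uint16_t   node;    /* sender's node id */
    int32_t    value;   /* payload, meaningful for MSG_POST */
    uint32_t   seq;     /* per-sender sequence number, for ordering */
} message_t;
```

A `seq` field like this is one answer to the slide's ordering question: it lets a receiver detect and discard stale or duplicated posts from a given sender.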

5 CSE 466 – Fall 2000 - Introduction - 5 Synchronization

- Write atomicity
- Order (which is important)
- Consistency
- Actual time of write

6 CSE 466 – Fall 2000 - Introduction - 6 Synchronization

Case 1:
  Task 1:
    for (i = 1; i < N; i++) {
        x = i;
        if (x >= N) error();
    }
  Task 2:
    for (i = 1; i < N-1; i++) {
        x = i;
        if (x >= N) error();
    }

Case 2:
  Task 1:
    for (i = 1; i < N; i++) {
        lock(); x = i; unlock();
        if (x >= N) error();
    }
  Task 2:
    for (i = 1; i < N-1; i++) {
        lock(); x = i; unlock();
        if (x >= N) error();
    }

How are they different? What to compare it to: two tasks running in a shared memory space. What would we want the system to guarantee? At a minimum: no guarantees about the value of x at the end of the loop, but the tasks will eventually agree on its value. No caching!
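Case 2 can be written as real thread code to see exactly what the lock does and does not buy. A sketch, assuming POSIX threads (the slide's lock()/unlock() become a pthread mutex): the lock makes each assignment atomic, but it does not stop the other task from writing x between our unlock() and the "x >= N" test, so the check can see the other task's value, just never a torn one.

```c
#include <assert.h>
#include <pthread.h>

enum { N = 1000 };

static int x;                                   /* the shared variable */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* One task's loop; hi is N for Task 1 and N-1 for Task 2. */
static void *task(void *hi_)
{
    int hi = *(int *)hi_;
    for (int i = 1; i < hi; i++) {
        pthread_mutex_lock(&m);
        x = i;                                  /* atomic under the lock */
        pthread_mutex_unlock(&m);
        assert(x < N);                          /* error(): both tasks only
                                                   ever write values < N */
    }
    return 0;
}
```

Because every value either task writes is below N, error() never fires even though x may hold the other task's counter; that is the "at a minimum" guarantee the slide asks for.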

7 CSE 466 – Fall 2000 - Introduction - 7 Compare to Copy-on-Read (no cache)

Case 1:
  Task 1:
    for (i = 1; i < N; i++) {
        write(X, i);
        if (read(X) >= N) error();
    }
  Task 2:
    for (i = 1; i < N-1; i++) {
        write(X, i);
        if (read(X) >= N) error();
    }

Case 2:
  Task 1:
    for (i = 1; i < N; i++) {
        lock(); x = i; unlock();
        if (x >= N) error();
    }
  Task 2:
    for (i = 1; i < N-1; i++) {
        lock(); x = i; unlock();
        if (x >= N) error();
    }

Assume copy-on-read; read blocks. Now, will the tasks eventually agree on the value of X at the end? Is this different in any way from the true shared-memory case? (Multiprocessor, no shared physical memory.)

8 CSE 466 – Fall 2000 - Introduction - 8 Network Shared Memory with Caching

Case 1:
  Task 1:
    for (i = 1; i < N; i++) {
        write(X, i);
        if (read(X) >= N) error();
    }
  Task 2:
    for (i = 1; i < N-1; i++) {
        write(X, i);
        if (read(X) >= N) error();
    }

Case 2:
  Task 1:
    for (i = 1; i < N; i++) {
        lock(); x = i; unlock();
        if (x >= N) error();
    }
  Task 2:
    for (i = 1; i < N-1; i++) {
        lock(); x = i; unlock();
        if (x >= N) error();
    }

Assume copy-on-write; read returns the last value received on the network.

- Corruption is not a problem: the actual local assignment takes place in the OS and is guaranteed atomic.
- Local behavior is determined by the order in which "write" messages are received at each task.
- Will the tasks eventually agree on the value of X?
- Is it sufficient for the transport layer just to send all write messages in order?
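The ordering question above can be made concrete with a per-writer sequence-number filter. This is a sketch under assumptions (hypothetical names; fixed node count): each writer stamps its posts with its own sequence number, and a receiver discards stale or duplicate posts from that writer. This gives per-writer order, but two receivers can still apply posts from different writers in different orders, so they may disagree about the "last" value of X until writing stops.

```c
#include <assert.h>

#define MAX_NODES 8

static int last_seq[MAX_NODES];  /* highest sequence applied, per writer */
static int x_cache;              /* this task's cached copy of X */

/* Apply an incoming "write" post; returns 1 if applied, 0 if discarded. */
int apply_post(int writer, int seq, int value)
{
    if (seq <= last_seq[writer])
        return 0;                /* old or duplicated post from this writer */
    last_seq[writer] = seq;
    x_cache = value;
    return 1;
}
```

So per-sender in-order delivery from the transport layer is not by itself enough for agreement; it only rules out a task going "backwards" with respect to any one writer.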

9 CSE 466 – Fall 2000 - Introduction - 9 Other Ideas

- This is not the same as the shared-memory cache-coherency problem: we can send messages to each other.
- It is not necessary to lock every write:
  - Single writer
  - Order vs. atomicity
- Can use a semaphore to protect critical sections.
- Where does the error handling go? Do we need ACK/NACK at the transport layer?
- Leases (the publisher has to renew periodically).
- Is broadcast worse than sending only to the list of subscribers?

10 CSE 466 – Fall 2000 - Introduction - 10 Homework

Questions for Friday:
- Extend the protocol stack to support signal(var) and wait(var) system calls:
  - wait: if the value at the address is > 0, decrement it and return true; if it is <= 0, block (if you time out, return false… be careful).
  - signal: increment the variable if the signaler is legitimate.
- Propose a scheme to ensure that the data link layer transmits messages in order with respect to each receiver.
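One possible shape for the wait/signal homework, sketched under assumptions (this is not the assignment's solution): the test-and-decrement must happen in one place to be atomic across nodes, so here it runs on the publisher that holds the semaphore's master copy, with subscribers sending it request messages.

```c
#include <assert.h>
#include <stdbool.h>

static int sem_master;   /* master copy of the counting semaphore,
                            held by the publisher (starts at 0) */

/* Publisher-side handler for a WAIT request: decrement only if positive.
 * On false, the requesting task blocks (or times out) and retries. */
bool handle_wait(void)
{
    if (sem_master > 0) {
        sem_master--;
        return true;
    }
    return false;
}

/* Publisher-side handler for a SIGNAL request. The "legitimate signaler"
 * check from the assignment is elided here. */
void handle_signal(void)
{
    sem_master++;
}
```

Running the check on the publisher sidesteps the race a cached copy would create: two subscribers that both see "1" locally cannot both decrement, because only the master's handler decides.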

11 CSE 466 – Fall 2000 - Introduction - 11 Implementation

Application: Responsible for the application semantics: what does the value of the shared variable mean?

Transport: Implements the "shared memory" interface by using the data link layer to send guaranteed messages between transport layers running on different processors. Might also implement FIFOs, semaphores, etc.
  Exports to application: publish(addr); subscribe(addr); post(addr, var); update(addr, &var);
  Exports to data link: transport_recv(message);

Data link: Guarantees error-free delivery of messages from one transport layer to another (in order?) using the available physical layer. Can implement a wide variety of retransmit schemes.
  Exports to transport layer: datalink_send(message);
  Exports to physical layer: datalink_recv(packet);

Physical: Converts a packet into a set of frames for transmission over the bus; frames are reconstructed and passed back to the data link layer at the other end. Knows how to drive the physical bus.
  Exports to data link layer: physical_send(packet);
  Provides ISRs to deal with events on the bus, such as start, stop, and byte transmission.
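The exported calls above can be written out as C signatures to see the layering end to end. The struct fields and bodies here are assumptions (the slide gives only the names); the bodies are no-op stubs that just record the call so the downward path application -> transport -> data link -> physical is visible.

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint16_t addr; int value; } message;
typedef struct { uint8_t bytes[8]; } packet;

static int frames_sent;   /* counts physical-layer transmissions */

/* Physical: drives the bus; a packet becomes frames on the wire. */
void physical_send(packet p) { (void)p; frames_sent++; }

/* Data link: guaranteed delivery over the physical layer
 * (framing and retransmission elided in this stub). */
void datalink_send(message m)
{
    packet p = {{0}};
    (void)m;
    physical_send(p);
}

/* Transport: the "shared memory" interface exported to the application. */
void publish(uint16_t addr)       { message m = { addr, 0 };   datalink_send(m); }
void subscribe(uint16_t addr)     { message m = { addr, 0 };   datalink_send(m); }
void post(uint16_t addr, int var) { message m = { addr, var }; datalink_send(m); }
```

The upcalls in the other direction (transport_recv(message) and datalink_recv(packet)) would mirror this chain bottom-up, with the physical layer's ISRs at the root.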

12 CSE 466 – Fall 2000 - Introduction - 12 Example: The Fuel Cell Controller

