Atomic Operations in Hardware


Atomic Operations in Hardware Previously, we introduced multi-core parallelism & cache coherence. Today we'll look at instruction support for synchronization, some pitfalls of parallelization, and solve a few mysteries. (Slide image: Intel Core i7) December 1, 2006 ©2006 Craig Zilles

A simple piece of code How long does this program take? How can we make it faster?

    unsigned counter = 0;

    void *do_stuff(void *arg) {
        for (int i = 0; i < 200000000; ++i) {
            counter++;    /* adds one to counter */
        }
        return arg;
    }

Exploiting a multi-core processor Split the for-loop across multiple threads running on separate cores (thread #1 and thread #2 each perform half of the iterations).

    unsigned counter = 0;

    void *do_stuff(void *arg) {
        for (int i = 0; i < 200000000; ++i) {
            counter++;
        }
        return arg;
    }

How much faster?

How much faster? We're expecting a speedup of 2, or perhaps a little less because of Amdahl's Law and the overhead of forking and joining multiple threads. But it's actually slower!! Why?? Here's the mental picture that we have: two processors with shared memory, and counter as a shared variable in that memory.

This mental picture is wrong! We've forgotten about caches! The memory may be shared, but each processor has its own L1 cache. As each processor updates counter, the cache line bounces between the L1 caches, and this bouncing slows performance.

The code is not only slow, it's WRONG! Since the variable counter is shared, we can get a data race. A data race occurs when data is accessed and manipulated by multiple processors, and the outcome depends on the sequence or timing of these events.

Increment operation: counter++ MIPS equivalent:

    lw   $t0, counter
    addi $t0, $t0, 1
    sw   $t0, counter

Sequence 1:
    Processor 1              Processor 2
    lw   $t0, counter
    addi $t0, $t0, 1
    sw   $t0, counter
                             lw   $t0, counter
                             addi $t0, $t0, 1
                             sw   $t0, counter
    counter increases by 2

Sequence 2:
    Processor 1              Processor 2
    lw   $t0, counter
                             lw   $t0, counter
    addi $t0, $t0, 1
                             addi $t0, $t0, 1
    sw   $t0, counter
                             sw   $t0, counter
    counter increases by 1 !!

What is the minimum value at the end of the program?

Atomic operations You can show that if the interleaving is particularly nasty, the final value of counter may be as little as 2, instead of 200000000. To fix this, we must do the load-add-store in a single step. We call this an atomic operation. We're saying: "Do this, and don't allow other processors to interleave memory accesses while doing this." "Atomic" in this context means "as if it were a single operation": either we succeed in completing the operation with no interruptions, or we fail to even begin the operation (because someone else was doing an atomic operation). Furthermore, it should be "isolated" from other threads. x86 provides a "lock" prefix that tells the hardware: "don't let anyone read/write the value until I'm done with it." This is not the default behavior (because it is slower!).

What if we want to generalize beyond increments? The lock prefix only works for individual x86 instructions. What if we want to execute an arbitrary region of code without interference? Consider a red-black tree used by multiple threads. Best mainstream solution: locks, which implement "mutual exclusion": you can't have it if I have it, I can't have it if you have it. To acquire: when lock == 0, set lock = 1 and continue. To release: set lock = 0.

Lock acquire code High-level version:

    unsigned lock = 0;
    while (1) {
        if (lock == 0) {
            lock = 1;
            break;
        }
    }

MIPS version ($a0 holds &lock):

    spin: lw   $t0, 0($a0)
          bne  $t0, $0, spin
          li   $t1, 1
          sw   $t1, 0($a0)

What problem do you see with this?

Race condition in lock-acquire Two threads can both load lock while it is 0, both see it as free, and both store 1, so both "acquire" the lock:

    spin: lw   $t0, 0($a0)
          bne  $t0, $0, spin
          li   $t1, 1
          sw   $t1, 0($a0)

Doing "lock acquire" atomically Make sure no one gets between the load and the store. Common primitive: compare-and-swap (old, new, addr). If the value in memory matches "old", write "new" into memory:

    temp = *addr;
    if (temp == old) {
        *addr = new;
    } else {
        old = temp;
    }

x86 calls it CMPXCHG (compare-exchange); use the lock prefix to guarantee its atomicity.

Using CAS to implement locks Acquiring the lock:

    lock_acquire:
        li  $t0, 0                  # old
        li  $t1, 1                  # new
        cas $t0, $t1, lock
        beq $t0, $t1, lock_acquire  # failed, try again

Releasing the lock:

        sw  $0, lock

Conclusions When parallel threads access the same data, there is potential for data races; this is true even on uniprocessors, due to context switching. We can prevent data races by enforcing mutual exclusion: allowing only one thread to access the data at a time, for the duration of a critical section. Mutual exclusion can be enforced by locks: the programmer allocates a variable to "protect" the shared data, and the program must perform a 0 → 1 transition before data access and a 1 → 0 transition after. Locks can be implemented with atomic operations (hardware instructions that enforce mutual exclusion on one data item) such as compare-and-swap: if the address holds "old", replace it with "new".