Concurrent Aggregates (CA)
Andrew A. Chien and William J. Dally
Presented by: John Lynn and Ryan Wu

Outline
- Motivation
- Serializing abstractions and concurrent aggregates
- Features
- Programming with CA
- Discussion

Motivation
- Parallel programs must take advantage of massive hardware concurrency.
- Two major concerns at the time: programming should be relatively easy (hence the appeal of object-oriented languages, which manage complexity), and the language must allow programmers to express sufficient concurrency.
- Hierarchies of abstractions are used to manage complexity, but serializing those abstractions can cause a significant loss of concurrency.

Serializing Abstractions
- Each object accepts one message at a time.
- Hierarchical abstractions (abstractions built from other abstractions) are ultimately built from single objects.
- Most concurrent object-oriented languages serialize hierarchical abstractions.
- This leaves programmers with a choice between reduced concurrency and working without useful levels of abstraction.

Concurrent Aggregates
- Concurrent aggregates allow programmers to build hierarchical abstractions without serialization.
- Each aggregate is multi-access: it can receive many messages simultaneously (sketched below).
- Result: an increased message rate for the lower levels of the hierarchy.
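
A minimal sketch of the multi-access idea in Python, not CA itself; CounterRep, CounterAggregate, and the locking scheme are assumptions of this sketch. A serialized counter would funnel every increment through a single object, while an aggregate spreads messages over many representatives that can work concurrently.

    import random
    import threading

    class CounterRep:
        # One representative: it serializes only the messages it receives itself.
        def __init__(self, icount=0):
            self.count = icount
            self.lock = threading.Lock()

        def increment(self):
            with self.lock:
                self.count += 1

    class CounterAggregate:
        # One name for many representatives: a message sent to the aggregate
        # is directed to an arbitrary representative, so many messages can be
        # handled simultaneously.
        def __init__(self, number_reps, icount=0):
            self.reps = [CounterRep(icount) for _ in range(number_reps)]

        def increment(self):
            random.choice(self.reps).increment()

        def total(self):
            return sum(r.count for r in self.reps)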

Background
- J-machine: 10^5 nodes, 64K words of local memory per node, fast communication.
- Fine-grained computation: message passing, context switching, and fast task creation and dispatch.
- No shared memory; each node executes instructions from its local memory.
- Concurrency in CA is derived from asynchronous message sends and synchronization through context futures.
- Similar settings in sensor networks?

Four Important Features
- Intra-aggregate addressing: allows representatives of an aggregate to compute the names of other parts of the aggregate.
- Delegation: pieces together one aggregate's behavior from the behavior of others.
- First-class messages: allow programmers to write message-manipulation abstractions.
- First-class continuations: enable programs to express synchronizing abstractions such as futures.

Combining Tree
- Message form: Combine <value>
- Each node combines the values received from its children and forwards the result toward the root (see the sketch below).
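
One plausible reading of the combining tree, sketched in Python; CombineNode and its fields are hypothetical names, not the paper's code. Each node counts the Combine messages it receives, accumulates their values, and forwards the combined result to its parent once all of its children have reported.

    import threading

    class CombineNode:
        def __init__(self, expected, parent=None):
            self.expected = expected      # number of Combine messages to await
            self.received = 0
            self.acc = 0                  # running combination (here, a sum)
            self.parent = parent
            self.lock = threading.Lock()
            self.result = None            # filled in at the root

        def combine(self, value):
            with self.lock:
                self.acc += value
                self.received += 1
                done = self.received == self.expected
            if done:
                if self.parent is not None:
                    self.parent.combine(self.acc)   # forward up the tree
                else:
                    self.result = self.acc          # the root holds the answer

For example, a two-level tree with root = CombineNode(2) and two leaves built as CombineNode(2, parent=root) delivers the total to root.result once every leaf has received both of its inputs.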

Programming with Concurrent Aggregates: Syntax

Aggregate:

    (aggregate name instance-variable*
      (parameters param-name+)
      (initial exp+))

Class:

    (class name instance-variable*
      (parameters param-name+)
      (initial exp+))
    (global initial-exp)

Example of Aggregate

    (aggregate counter count                        ; aggregate name, instance variable
      (parameters number_reps icount)               ; representative count, initial value
      (initial number_reps                          ; create number_reps representatives
        (forall index from 1 below number_reps
          (set_count (sibling group index) icount)) ; initialize each sibling
        (set_count self icount)))                   ; initialize this representative

Method and Handler
- Handler: (handler (arg*) exp+)
- Method: (method (arg*) exp+)
- Delegate: (delegate instance-variable)

Control constructs and message sends

Aggregation and Naming
- Aggregate: a homogeneous collection of objects (representatives) grouped together and referenced by a single aggregate name.
- Messages sent to the aggregate are directed to arbitrary representatives of its sibling group.

Delegation and Message Handling
- Handlers are the methods of aggregates.
- Delegates specify targets to handle messages.
- The :rest delegate can handle messages with no specified handler.

Bank Example
- A bank aggregate composed of a tellers aggregate, a loan officer, and a manager.
- Unusual messages are delegated to the manager, as the sketch below illustrates.
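
A hedged Python analogue of the bank's delegation; Bank, Manager, and the use of __getattr__ to stand in for CA's :rest delegate are all assumptions of this sketch. Ordinary requests are handled by the bank's own parts, and anything the bank has no handler for is forwarded to the manager.

    class Manager:
        # Delegate target: handles whatever the bank itself does not.
        def approve_exception(self, request):
            return f"manager approves: {request}"

    class Bank:
        def __init__(self, tellers, manager):
            self.tellers = tellers
            self._delegate = manager      # plays the role of a :rest delegate

        def deposit(self, account, amount):
            # An ordinary message, handled by the tellers aggregate.
            self.tellers.deposit(account, amount)

        def __getattr__(self, name):
            # No handler here: forward the message to the delegate, as CA's
            # :rest delegate would.
            return getattr(self._delegate, name)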

Message Handling
- Messages are first-class objects in CA.
- They can be created and modified programmatically.
- Messages are passed by value, but copying is only one level deep (see the sketch below).
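
A small sketch of first-class messages in Python; Message and send are invented names. A message is an ordinary object whose selector and arguments can be built and edited programmatically, and whose arguments are copied one level deep when sent, mirroring CA's by-value but shallow copying.

    import copy

    class Message:
        def __init__(self, selector, *args):
            self.selector = selector
            self.args = list(args)       # can be inspected and modified later

    def send(target, msg):
        # Copy each argument one level deep (a shallow copy), then dispatch
        # the message to the handler named by its selector.
        shallow_args = [copy.copy(a) for a in msg.args]
        return getattr(target, msg.selector)(*shallow_args)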

Continuations
- What is a continuation? An object representing the computation that awaits a reply.
- Continuations separate code from synchronization.
- Futures can be expressed as continuations.

Futures
- The value method returns the value immediately if it is available; otherwise it adds the requesting continuation to a deferred list.
- The set_value method forwards the value to all continuations on the deferred list (see the sketch below).
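
A sketch of this future protocol in Python; the names and the locking are assumptions, and plain callables stand in for CA's first-class continuations. value replies immediately when the value is present, and otherwise records the requester's continuation on a deferred list that set_value later drains.

    import threading

    class Future:
        def __init__(self):
            self._lock = threading.Lock()
            self._has_value = False
            self._value = None
            self._deferred = []           # continuations awaiting the value

        def value(self, continuation):
            with self._lock:
                if not self._has_value:
                    self._deferred.append(continuation)   # defer the reply
                    return
                v = self._value
            continuation(v)               # value available: reply immediately

        def set_value(self, v):
            with self._lock:
                self._value, self._has_value = v, True
                waiters, self._deferred = self._deferred, []
            for k in waiters:
                k(v)                      # forward to each deferred continuation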

Objects as Continuations
- A continuation is just an object that expects a reply.
- Complex synchronization structures can be implemented as continuations.

Barrier
- A barrier object implements the reply method.
- Once maxcount replies have arrived, all suspended computations are resumed (see the sketch below).
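
A continuation-flavored barrier sketch in Python; the names are hypothetical and plain callables again stand in for CA continuations. Each arriving computation sends reply with its continuation; once maxcount replies have been collected, every suspended computation is resumed.

    import threading

    class Barrier:
        def __init__(self, maxcount):
            self.maxcount = maxcount
            self.waiting = []             # continuations of suspended senders
            self.lock = threading.Lock()

        def reply(self, continuation):
            with self.lock:
                self.waiting.append(continuation)
                if len(self.waiting) < self.maxcount:
                    return                # not full yet: sender stays suspended
                resumed, self.waiting = self.waiting, []
            for k in resumed:
                k()                       # maxcount reached: resume everyone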

Conclusion
- CA provides a programming abstraction for dealing with many processors.
- The abstraction takes much of the complexity out of parallelizing a computation.
- First-class continuations allow for modular synchronization structures.

Discussion
- How might these concepts parallel what we are trying to accomplish with sensor networks?
- How do these concepts differ?
- Of what use could concurrent aggregates be in programming a sensor network?