Chapel: Status/Community Brad Chamberlain Cray Inc. CSEP 524 May 20, 2010.


Outline
- Chapel Context
- Global-view Programming Models
- Language Overview
- Status, Collaborations, Future Work  (this talk)

The Chapel Team
- Current team: Sung-Eun Choi, David Iten, Lee Prokowich, Steve Deitz, Brad Chamberlain, and half of Greg Titus
- Interns: Hannah Hemmaplardh ('10, UW), Jonathan Turner ('10, CU Boulder), Jacob Nelson ('09, UW), Albert Sidelnik ('09, UIUC), Andy Stone ('08, Colorado St.), James Dinan ('07, Ohio State), Robert Bocchino ('06, UIUC), Mackale Joyner ('05, Rice)
- Alumni: David Callahan, Roxana Diaconescu, Samuel Figueroa, Shannon Hoffswell, Mary Beth Hribar, Mark James, John Plevyak, Wayne Wong, Hans Zima

Chapel Work
The Chapel team's focus:
- specify Chapel syntax and semantics
- implement an open-source prototype compiler for Chapel
- perform code studies of benchmarks, apps, and libraries in Chapel
- do community outreach to inform and learn from users/researchers
- support users of code releases
- refine the language based on all of these activities

Compiling Chapel
Chapel source code, together with the Chapel standard modules, is fed to the Chapel compiler, which produces a Chapel executable.

Chapel Compiler Architecture
The Chapel-to-C compiler translates Chapel source code (along with the standard modules and the internal modules, themselves written in Chapel) into generated C code. A standard C compiler and linker then combines that C code with the runtime support libraries (in C) and the 1-sided messaging and threading libraries to produce the Chapel executable.
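As a concrete illustration (mine, not from the slides), a minimal Chapel program like the one below is compiled by invoking the chpl driver, e.g. `chpl hello.chpl -o hello`, which emits C code and hands it to the system C compiler and linker as described above:

```chapel
// hello.chpl -- a minimal Chapel program. The chpl compiler translates
// this file to C; a standard C compiler then links the result against
// the runtime support, messaging, and threading libraries.
writeln("Hello from Chapel");
```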

Chapel and the Community
Our philosophy:
- help the parallel community understand what we are doing
- develop Chapel as an open-source project
- encourage external collaborations
- over time, turn the language over to the community

Goals:
- to get feedback that will help make the language more useful
- to support collaborative research efforts
- to accelerate the implementation
- to aid with adoption

Chapel Release
- Current release: version 1.1 (April 15, 2010)
- Supported environments: UNIX/Linux, Mac OS X, Cygwin
- How to get started:
  1. Download from:
  2. Unpack the tar.gz file
  3. See the top-level README for quick-start instructions and for pointers to next steps with the release
- Your feedback is desired!
- Remember: this is a work in progress
  - it is likely that you will find problems with the implementation
  - this is still a good time to influence the language's design

Implementation Status (v1.1)
- Base language: stable (some gaps and bugs remain)
- Task parallelism: stable multi-threaded implementation of tasks and sync variables; atomic sections are an area of ongoing research with the University of Notre Dame
- Data parallelism: stable multi-threaded data parallelism for dense domains/arrays; other domain types have a single-threaded reference implementation
- Locality: stable locale types and arrays; stable task parallelism across multiple locales; initial support for some distributions: Block, Cyclic, Block-Cyclic
- Performance: received much attention in the design of the language, but minimal implementation effort to date
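To make the feature names above concrete, here is a small sketch (mine, not from the talk) of the task-parallel and data-parallel features listed as stable: a structured `sync` block of `begin` tasks, and a `forall` loop over a dense rectangular domain. The domain-literal syntax follows current Chapel spelling and may differ slightly from the 1.1-era release.

```chapel
config const n = 8;

// Task parallelism: a sync block waits for all tasks begun inside it.
sync {
  begin writeln("hello from an asynchronous task");
  begin writeln("hello from another task");
}

// Data parallelism: a forall loop over a dense rectangular domain.
const D = {1..n};
var A: [D] int;
forall i in D do
  A[i] = i * i;
writeln(A);
```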

Selected Collaborations (see chapel.cray.com for the complete list)
- Notre Dame/ORNL (Peter Kogge, Srinivas Sridharan, Jeff Vetter): asynchronous software transactional memory over distributed memory
- UIUC (David Padua, Albert Sidelnik): Chapel for hybrid CPU-GPU computing
- BSC/UPC (Alex Duran): Chapel over Nanos++ user-level tasking
- U. Malaga (Rafa Asenjo, Maria Gonzales, Rafael Larossa): parallel file I/O for whole-array reads/writes
- University of Colorado, Boulder (Jeremy Siek, Jonathan Turner): concepts/interfaces for improved support for generic programming
- PNNL/CASS-MT (John Feo, Daniel Chavarria): hybrid computing in Chapel; performance tuning for the Cray XMT; ARMCI port
- ORNL (David Bernholdt et al.; Steve Poole et al.): Chapel code studies: Fock matrices, MADNESS, Sweep3D, coupled models, …
- U. Oregon, ParaTools Inc.: Chapel performance analysis using TAU
- (Your name here?)

Collaboration Opportunities (see chapel.cray.com for more details)
- memory management policies/mechanisms
- dynamic load balancing: task throttling and stealing
- parallel I/O and checkpointing
- exceptions; resiliency
- language interoperability
- application studies and performance optimizations
- index/subdomain semantics and optimizations
- targeting different back ends (LLVM, MS CLR, …)
- runtime compilation
- library support
- tools: debuggers, performance analysis, IDEs, interpreters, visualizers
- database-style programming
- (your ideas here…)

Chapel and Education
If I were to offer a parallel programming class, I'd want to teach about:
- data parallelism
- task parallelism
- concurrency
- synchronization
- locality/affinity
- deadlock, livelock, and other pitfalls
- performance tuning
- …

I don't think there's a good language out there…
- …for teaching all of these things
- …for teaching some of these things at all
- …until now: I think Chapel has the potential to play a crucial role here

Our Next Steps
- Expand our set of supported distributions
- Continue to improve performance
- Continue to add missing features
- Expand the set of codes that we are studying
- Expand the set of architectures that we are targeting
- Support the public release
- Continue to support collaborations and seek out new ones
- Continue to expand our team

Summary
Chapel strives to greatly improve parallel productivity via its support for…
- …general parallel programming
- …global-view abstractions
- …control over locality
- …multiresolution features
- …modern language concepts and themes