Programming Models & Runtime Systems Breakout Report MICS PI Meeting, June 27, 2002.


Question 1: What are the unfunded (and/or underfunded) basic research issues critical to our ability to exploit next-generation systems?

Programming Models for 1M Threads
Most common programming languages are based on linear execution of a single thread; SPMD is an implicit array of such single threads. It is hard to imagine this model scaling to 1M threads (herding weasels). We need imaginative research on programming models based on different abstractions (no variables? no sequential execution?).
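To make the "implicit array of single threads" point concrete, here is a minimal SPMD sketch in Python: one program body executed by identical threads, each distinguished only by its rank. The thread count and partial-sum workload are invented for illustration, not taken from the slides.

```python
import threading

NPROCS = 8
results = [0] * NPROCS

def program(rank):
    # Every rank runs the same linear code path; only the data range it
    # touches differs. This is the model the slide doubts will scale to
    # a million threads.
    lo, hi = rank * 100, (rank + 1) * 100
    results[rank] = sum(range(lo, hi))

threads = [threading.Thread(target=program, args=(r,)) for r in range(NPROCS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sum(results)  # combined result of all ranks: sum(range(800))
```

Each thread is an independent sequential program; nothing in the model itself expresses the collective computation, which is exactly the gap the slide points at.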

Locality Handling
Current programming models and languages have no good representation for locality: languages focus on control and computation, not on data location and data movement. What are ways of providing different levels of abstraction for data location, data movement, and locality? What is the equivalent, for data, of the procedural abstraction we have for computation?
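One hypothetical shape such an abstraction could take: data that carries an explicit "home" location, with movement between locations as a visible, first-class operation rather than a side effect of control flow. All names here (`PlacedArray`, `move_to`, the node labels) are invented for illustration.

```python
class PlacedArray:
    """Data with an explicit home location and countable movement."""

    def __init__(self, data, home):
        self.data = list(data)
        self.home = home   # which node/memory the data currently lives on
        self.moves = 0     # explicit movements, visible to tools and costable

    def move_to(self, new_home):
        # Data movement is a first-class operation the programmer (or an
        # optimizer) can see, count, and reason about.
        if new_home != self.home:
            self.home = new_home
            self.moves += 1
        return self

a = PlacedArray(range(4), home="node0")
a.move_to("node1")   # movement is explicit, not implied by a remote access
```

Because movement is explicit, a compiler or runtime could in principle schedule, batch, or eliminate it, which is hard when location is invisible in the source program.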

Programmer Productivity
Higher-level programming languages:
–Bring Mathematica/MATLAB-like environments to HPC.
–Languages that support very fast turnaround on relatively small (but computationally heavy) codes, to enable fast experimentation or fast time to solution.
–Support the ability to later refine such code into tuned code and to integrate it into larger systems.
In short: rapid prototyping for HPC.

Fault Tolerance
Should fault tolerance surface to the runtime and/or the programming model? What is the right combination of hardware/system/runtime/programming language? How do we avoid global checkpoints? How do we support reactive high-performance computing environments? Trustworthy computing in general: solutions to software bugs, intruder attacks…
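A toy sketch of what avoiding global checkpoints might look like: each worker saves its own state independently and uncoordinated, so recovery restores only the failed worker instead of rolling every process back to a global snapshot. The worker/state model is invented for illustration.

```python
import copy

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.state = 0
        self._ckpt = 0

    def step(self):
        self.state += 1

    def checkpoint(self):
        # Local, uncoordinated checkpoint: no barrier, no global snapshot.
        self._ckpt = copy.deepcopy(self.state)

    def recover(self):
        # Only this worker rolls back; its peers keep their progress.
        self.state = self._ckpt

workers = [Worker(i) for i in range(3)]
for w in workers:
    w.step()
    w.checkpoint()
    w.step()

workers[1].recover()                 # simulate a fault on worker 1 only
states = [w.state for w in workers]  # [2, 1, 2]
```

A real scheme must also handle in-flight messages (e.g., via message logging) so that local rollback stays consistent; the sketch shows only the structural idea.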

On-the-Fly Code Tuning
Scientific codes run for long times. By collecting data during execution, one can continuously tune the code. This requires the right language/compiler/runtime environment.
–The same idea applies a fortiori to libraries.
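A minimal sketch of the idea: the runtime times competing implementations of the same kernel during execution and routes later calls to whichever measured fastest. The two variants and the `TunedKernel` wrapper are stand-ins invented for illustration, not a real autotuning system.

```python
import time

def variant_a(n):
    return sum(range(n))        # loop-style implementation

def variant_b(n):
    return n * (n - 1) // 2     # closed-form implementation

class TunedKernel:
    """Route calls to whichever variant measured fastest at runtime."""

    def __init__(self, variants):
        self.variants = variants
        self.best = None

    def __call__(self, n):
        if self.best is None:
            # First call: time every variant once and keep the fastest.
            timings = []
            for v in self.variants:
                t0 = time.perf_counter()
                result = v(n)
                timings.append((time.perf_counter() - t0, v))
            self.best = min(timings, key=lambda t: t[0])[1]
            return result
        # Later calls go straight to the chosen variant.
        return self.best(n)

kernel = TunedKernel([variant_a, variant_b])
first, second = kernel(1000), kernel(1000)
```

A long-running code amortizes the measurement cost over many calls; a production system would keep re-measuring as inputs and machine load change, rather than freezing the choice after one trial.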

Flexible Run-Time
Suppose the architecture provides finer-grain, more fluid “objects” (threads, data objects) to facilitate migration, load balancing, and fault isolation. How do we build programming languages and runtimes to take advantage of it?
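A toy sketch of the "fluid objects" idea: work lives in many small migratable objects rather than being pinned to threads, so a runtime load balancer can reassign objects to processors. The names and the greedy balancer are invented for illustration; Charm++-style runtimes pursue this idea in earnest.

```python
class Obj:
    """A small unit of migratable work with a known load estimate."""
    def __init__(self, load):
        self.load = load

def balance(objs, nprocs):
    # Greedy rebalancing: place each object, heaviest first, on the
    # currently least-loaded processor. Because work is in many small
    # objects, the runtime has the freedom to move it around.
    procs = [[] for _ in range(nprocs)]
    loads = [0] * nprocs
    for o in sorted(objs, key=lambda o: -o.load):
        p = loads.index(min(loads))
        procs[p].append(o)
        loads[p] += o.load
    return loads

loads = balance([Obj(l) for l in (5, 3, 3, 2, 2, 1)], 2)  # -> [8, 8]
```

The point of the slide's question is that languages and runtimes must expose such objects (and their migration) without burdening the programmer with placement details.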

Compiler Infrastructure
Research in parallelizing compilers and compilation for scientific computing is shrinking; vendors are disinvesting.
–Should we invest in common compiler infrastructure (e.g., front ends)?
–How do we develop more modular compilation approaches that support plug-in compiler components?
–Compiler toolkits to facilitate research sharing.
–Locality-aware code generation techniques.
–Compilers that interact with the user to choose optimizations, and that are actively used for performance analysis.
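The plug-in-component idea can be sketched as a pass pipeline in which every pass shares one interface and can be registered, replaced, or contributed independently. The tiny constant-folding IR here is invented for illustration; it is not any real compiler's representation.

```python
class Pass:
    """Common interface every plug-in compiler pass implements."""
    def run(self, ir):
        raise NotImplementedError

class ConstantFold(Pass):
    def run(self, ir):
        # Fold ("add", const, const) nodes into ("const", value) nodes.
        return [("const", a + b) if op == "add" else (op, a, b)
                for (op, a, b) in ir]

class Pipeline:
    def __init__(self):
        self.passes = []

    def register(self, p):
        # The plug-in point: any pass honoring the interface can join.
        self.passes.append(p)

    def compile(self, ir):
        for p in self.passes:
            ir = p.run(ir)
        return ir

pipe = Pipeline()
pipe.register(ConstantFold())
out = pipe.compile([("add", 2, 3), ("mul", 4, 5)])
# -> [("const", 5), ("mul", 4, 5)]
```

A shared infrastructure of this shape is what would let research groups exchange individual passes instead of rebuilding whole compilers.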

Semantically Rich Interfaces
Find ways of conveying more information at library interfaces, e.g., semantic information provided by the user or deduced by the compiler, performance information, or common-use information. The goal is to enable separate optimization of library invocations and, more generally, modular optimization.
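One illustrative shape for this: attach semantic metadata at the interface with a decorator, and let the runtime exploit it, here by caching calls declared pure. All annotation names (`interface`, `pure`, `cost`) are invented for this sketch.

```python
def interface(pure=False, cost=None):
    """Attach semantic/performance metadata to a library entry point."""
    def wrap(fn):
        fn.meta = {"pure": pure, "cost": cost}
        if pure:
            # Because the interface declares purity, repeated invocations
            # with the same arguments can legally be served from a cache.
            cache = {}
            def cached(*args):
                if args not in cache:
                    cache[args] = fn(*args)
                return cache[args]
            cached.meta = fn.meta
            return cached
        return fn
    return wrap

calls = {"n": 0}

@interface(pure=True, cost="O(n)")
def dot(n):
    calls["n"] += 1
    return sum(i * i for i in range(n))

dot(10)
dot(10)   # second invocation is served from the cache; calls["n"] stays 1
```

Caching is only one consumer of such metadata; the same annotations could feed a cost model for scheduling or enable cross-call optimization of library invocations.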

Question 2: What do you need most from the other four CS research groups to enable your future progress and enhance the effectiveness of your work?

Lots of Help
Viz/data: large-scale performance/behavior visualization.
–Current tools scale badly.
–Discrete data visualization, as opposed to continuous fields.
–Relate program, performance data, compiler decisions…
Interoperability/portability: CCA ideas could be expanded to provide semantic information across interfaces, including performance information to support global resource management.
–Ways to express component semantics so as to support composition.
–CCA component ideas applied to infrastructure components as well as application components, e.g., compilers and debuggers.

Even More Help
Performance: real-time performance measurements that support feedback-directed optimizations; hardware/system support for aggregating performance information.
OS:
–An OS that does not require OS bypass: bypass is needed today because the OS does not have the right interface/function/performance. Develop virtual machines with the right mechanisms.
–Hierarchical and dynamic resource management: the ability to dynamically allocate all system resources and to grow and shrink jobs; an interface for dialogue and negotiation between system and runtime over resource allocation (including distributed systems); two-level allocation, to the job and within the job, for all resources; application-specific policies.
–Fast job startup/rundown.
–Support for fault tolerance, e.g., virtualization and remapping of all resources, as well as support for fault tolerance by the application.

Question 3: What are the current gaps in the MICS CS research portfolio related to petascale computing? What new topic areas should be added to the MICS portfolio?

Other Things That Would Help but Aren’t Being Addressed
–Formal methods for verification; static and dynamic checking; assertion languages; dynamic invariant checking; introspection…
–Debugging (related to formal methods).
–Compiler support for HPC.
–Architecture research: funding for new programs or for tie-ins with existing projects (DARPA funding).
–Computing facilities for simulations (large scale) and new-hardware testing facilities (a small-scale “toy shop”).

The End