Greenbook 2001/OASCR (2/22/2001)

Slide 1: Greenbook/OASCR Activities
Mike Minkoff/ANL
– Focus on technology to enable SCIENCE to be conducted, i.e., software tools and software libraries.

Slide 2: Outline
– MPI Extensions and Programming Models
– Grid-based computing: MPICH-G2
– Grid-based Climate Simulation

Slide 3: Standards-Based Programming Environments
– The MPI (Message-Passing Interface) standard defines a portable library interface for the message-passing model of parallel computation (see the sketch below).
– MPICH is a portable, high-performance implementation of the standard.
– Many vendor implementations of MPI have been based on MPICH.
– Research continues on implementation issues necessary for increased performance and scalability.
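To make the message-passing model concrete, here is a minimal MPI program in C. It is an illustrative sketch, not taken from the slides: rank 0 sends one integer to rank 1, with communication expressed only through matched send and receive calls.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Rank 0 sends one integer to rank 1 (tag 0). */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Rank 1 receives it; no shared memory is assumed. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

Because the program is written against the MPI standard rather than a particular machine, the same source runs unchanged under MPICH or any vendor implementation.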

Slide 4: Beyond Message-Passing
– MPI-2 is a standard extension to the message-passing model specification, adding:
  – parallel I/O
  – dynamic creation of processes
  – one-sided remote memory access (illustrated below)
– MPICH will soon provide a portable implementation of MPI-2.
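The one-sided model is the biggest conceptual departure from MPI-1, so a brief sketch may help. The example below is hypothetical and assumes at least two processes: each process exposes an integer through an MPI-2 window, and rank 0 deposits a value directly into rank 1's memory with MPI_Put, with no matching receive posted on the target.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, local = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one integer per process for remote access. */
    MPI_Win_create(&local, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open an access epoch */
    if (rank == 0) {
        int val = 100;
        /* Write directly into rank 1's window; rank 1 posts no receive. */
        MPI_Put(&val, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* complete the transfer */

    if (rank == 1)
        printf("rank 1's window now holds %d\n", local);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```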

Slide 5: Experimental Programming Models
– Robust implementations of language extensions, such as Co-Array Fortran, are needed if such approaches are to evolve into real productivity enhancers for DOE applications.
– More speculatively, new memory-centric programming and execution models will be needed for future machine architectures.
– Argonne is leading an effort to explore both near-term and longer-term programming model issues.

Slide 6: MPICH-G2
– Developed by Karonis (NIU) and Toonen (ANL)
– Based on ANL's MPICH library (Gropp & Lusk)
– A grid-enabled MPI

Slide 7: MPICH-G2: A Grid-Enabled MPI
– Uses many Globus services:
  – job startup
  – GSI for security
  – data conversion
  – asynchronous socket communication (Globus I/O)
– Multi-protocol support (see the launch sketch below):
  – vendor-supplied MPI for intra-machine messages
  – TCP for inter-machine messages
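MPICH-G2 jobs were typically launched through mpirun, which submits a Globus RSL request on the user's behalf. As a hedged sketch (the hostnames, counts, and paths are hypothetical), a multirequest RSL of roughly this shape starts one subjob per machine; MPICH-G2 then carries intra-machine messages over the vendor MPI and inter-machine messages over TCP:

```
+
( &(resourceManagerContact="sp2.nersc.gov")
   (count=8)
   (jobtype=mpi)
   (executable=/home/user/my_mpi_app)
)
( &(resourceManagerContact="o2k.mcs.anl.gov")
   (count=8)
   (jobtype=mpi)
   (executable=/home/user/my_mpi_app)
)
```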

Slide 8: MPI "Grid" Applications
– MPI applications that need to solve problems too big for any single computer
– Use MPICH-G2 to couple multiple computers into a computational grid
– Modify the application to respect slower LAN/WAN performance (one approach is sketched below)
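One common way to respect the slower wide-area links is to overlap inter-machine communication with local computation using nonblocking MPI calls, which MPICH-G2 supports like any other implementation. A minimal sketch in C; the buffer names, the message size, and the work routines are hypothetical:

```c
#include <mpi.h>

#define N 4096   /* hypothetical halo size */

/* Stand-in work routines; a real code would update a stencil here. */
static void compute_interior(double *u) { (void)u; }
static void compute_boundary(double *u) { (void)u; }

/* Exchange halo data with a (possibly wide-area) neighbor while
   computing on the interior, hiding some of the WAN latency. */
void exchange_and_compute(double *sendbuf, double *halo,
                          double *interior, int neighbor, MPI_Comm comm) {
    MPI_Request reqs[2];
    MPI_Status stats[2];

    /* Start the slow inter-machine transfers without blocking.
       sendbuf must not be modified until MPI_Waitall completes. */
    MPI_Irecv(halo, N, MPI_DOUBLE, neighbor, 0, comm, &reqs[0]);
    MPI_Isend(sendbuf, N, MPI_DOUBLE, neighbor, 0, comm, &reqs[1]);

    compute_interior(interior);   /* local work overlaps the WAN traffic */

    MPI_Waitall(2, reqs, stats);  /* now the halo data is usable */
    compute_boundary(halo);
}
```

The same pattern helps on a single machine, but the payoff grows with latency, which is exactly the regime a coupled-machine run operates in.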

Slide 9: Cactus
– Developed at the Max Planck Institute for Gravitational Physics, Germany
– Originally developed as a framework for the numerical solution of Einstein's equations
– Evolved into a general-purpose problem-solving environment that provides a modular, parallel computational framework atop MPI

Slide 10: Cactus-G: A Case Study
– Cactus-G: coupled Cactus and MPICH-G2
– Ran across multiple machines:
  – T3E 900 and IBM SP2 (NERSC)
  – Origin 2000 and IBM SP (ANL)
  – Origin 2000 (NCSA)
  – IBM SP (SDSC)

Slide 11: Our Experience
– Primary problem: WAN performance between sites was far below what was "advertised".
– Machines with a single portal (processor/network interface) to the WAN contributed to the problem.
– Our conclusions:
  – We don't need a bigger machine.
  – We need better communication performance between existing machines.
  – Machine architectures need to support high-bandwidth, multi-stream, off-machine communication.

Slide 12: Next Generation NERSC: Grid Computing
– Grid access using Globus to enable:
  – remote job submission to all NERSC computer systems (example commands below)
  – data transfer to/from NERSC computers, including tape storage, using grid technologies
  – a new generation of grid-enabled tools to facilitate job submission, monitoring, and analysis
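For concreteness, this is the kind of command-line session such access implies; the hostnames and paths below are hypothetical, while grid-proxy-init, globus-job-run, and globus-url-copy are the standard Globus Toolkit tools for proxy creation, remote execution, and GridFTP transfer.

```
# Create a short-lived proxy credential from your grid certificate (GSI).
grid-proxy-init

# Run a job on a remote NERSC system through GRAM.
globus-job-run sp2.nersc.gov /bin/date

# Move a dataset with GridFTP.
globus-url-copy gsiftp://sp2.nersc.gov/scratch/run01.nc \
                file:///home/user/run01.nc
```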

Slide 13: Next Generation NERSC: Climate Computing
– Integrating climate models requires:
  – multi-teraflop-scale computer systems able to deliver teraflop performance to a single user on a routine basis
  – the ability to perform long-duration simulations, which requires access to a large number of processors and TB of disk space
  – enhanced software tools for analysis and visualization of TB of model results

Slide 14: Science Projects
– Three-dimensional premixed turbulent flames with a full chemistry model (J. Bell, LBNL)
– Discrete event simulation (M. Novotny, FSU)

Slide 15: Questions to Ponder: What Should NERSC Support Spend Resources On?
– Remote and secure job submission and storage access
  – Support for PSEs
– Role of nanotech/SciDAC simulation requirements
– Diversity vs. a single-source machine
– Role of libraries (IMSL, EISPACK) that are dated and/or single-processor
– Role of the NERSC Math Server
– Integrated debugging/performance tools
– Extending UHU to OASCR/application matching