The Distributed ASCI Supercomputer (DAS) project
Henri Bal
Vrije Universiteit Amsterdam, Faculty of Sciences

Introduction
- DAS has a long history and continuity
  - DAS-1 (1997), DAS-2 (2002), DAS-3 (July 2006?)
- Simple Computer Science grid that works
  - Over 200 users, 25 Ph.D. theses
  - Stimulated new lines of CS research
  - Used in international experiments
- Colorful future: DAS-3 is going optical

Outline
- History
- Impact on Dutch computer science research
  - Trend: cluster computing → distributed computing → Grids → virtual laboratories
- Example research projects: Ibis, Satin, Zorilla
- Grid experiments on DAS-2, GridLab, Grid’5000
- Future: DAS-3

Organization
- Research schools (Dutch product from the 1990s)
  - Stimulate top research & collaboration
  - Organize Ph.D. education
- ASCI: Advanced School for Computing and Imaging (1995-)
  - About 100 staff and 100 Ph.D. students
- DAS proposals written by ASCI committees
  - Chaired by Tanenbaum (DAS-1), Bal (DAS-2, DAS-3)

Funding
- Funded mainly by NWO (Dutch national science foundation)
- Motivation: CS needs its own infrastructure for
  - Systems research and experimentation
  - Distributed experiments
  - Doing many small, interactive experiments
- Need a distributed experimental system, rather than a centralized production supercomputer

Design philosophy
- Goals of DAS-1 and DAS-2:
  - Ease collaboration within ASCI
  - Ease software exchange
  - Ease systems management
  - Ease experimentation
  → Want a clean, laboratory-like system
- Keep DAS simple and homogeneous
  - Same OS, local network, and CPU type everywhere
  - Single (replicated) user account file

Behind the screens…
(Source: Tanenbaum, ASCI’97 conference)

DAS-1 (1997)
- Sites: VU (128), Amsterdam (24), Leiden (24), Delft (24)
- Wide-area network: 6 Mb/s ATM
- Configuration:
  - 200 MHz Pentium Pro
  - Myrinet interconnect
  - BSDI => Redhat Linux

DAS-2 (2002-now)
- Sites: VU (72), Amsterdam (32), Leiden (32), Delft (32), Utrecht (32)
- Wide-area network: SURFnet, 1 Gb/s
- Configuration (per node):
  - two 1 GHz Pentium-3s
  - >= 1 GB memory
  - GB disk
  - Myrinet interconnect
  - Redhat Enterprise Linux
  - Globus 3.2
  - PBS => Sun Grid Engine

DAS accelerated the research trend:
cluster computing → distributed computing → Grids and P2P → virtual laboratories

Examples: cluster computing
- Communication protocols for Myrinet
- Parallel languages (Orca, Spar)
- Parallel applications
  - PILE: Parallel image processing
  - HIRLAM: Weather forecasting
  - Solving Awari (3500-year-old game)
- GRAPE: N-body simulation hardware

Distributed supercomputing on DAS
- Parallel processing on multiple clusters
- Study non-trivially parallel applications
- Exploit the hierarchical structure for locality optimizations
  - latency hiding, message combining, etc. (a message-combining sketch follows below)
- Successful for many applications
  - E.g. Jem3D in ProActive [F. Huet, SC’04]
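To make the locality optimization concrete, here is a minimal, illustrative Java sketch of message combining: small messages destined for a remote cluster are buffered and flushed as one batch, trading a little latency for far fewer wide-area transfers. The class and method names are hypothetical, not DAS or Ibis code, and message framing (length prefixes) is omitted for brevity.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical helper: batches small messages bound for a remote cluster
    // and writes them out as a single wide-area transfer.
    public class MessageCombiner {
        private final OutputStream wanLink;        // stream to the remote cluster
        private final int flushThreshold;          // flush once this many bytes are buffered
        private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

        public MessageCombiner(OutputStream wanLink, int flushThreshold) {
            this.wanLink = wanLink;
            this.flushThreshold = flushThreshold;
        }

        // Append one small message; only flush when the batch is large enough.
        public synchronized void send(byte[] message) throws IOException {
            buffer.write(message);
            if (buffer.size() >= flushThreshold) {
                flush();
            }
        }

        // Push the combined batch over the wide-area link in one go.
        public synchronized void flush() throws IOException {
            if (buffer.size() == 0) return;
            buffer.writeTo(wanLink);
            wanLink.flush();
            buffer.reset();
        }
    }

A real system would also flush on a timer so that a lone message is not delayed indefinitely; the point here is only the batching idea behind "message combining".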

Example projects
- Albatross: optimize algorithms for wide-area execution
- MagPIe: MPI collective communication for WANs
- Manta: distributed supercomputing in Java
- Dynamite: MPI checkpointing & migration
- ProActive (INRIA)
- Co-allocation/scheduling in multi-clusters
- Ensflow: stochastic ocean flow model

Grid & P2P computing: using DAS-2 as part of larger heterogeneous grids
- Ibis: Java-centric grid computing
- Satin: divide-and-conquer on grids
- Zorilla: P2P distributed supercomputing
- KOALA: co-allocation of grid resources
- Globule: P2P system with adaptive replication
- CrossGrid: interactive simulation and visualization of a biomedical system

Virtual Laboratories
[Diagram: several virtual laboratories, each consisting of an application-specific part, a potentially generic part, and management of communication & computing, layered on application-oriented services and a Grid that harnesses multi-domain distributed resources]

VL-e: Virtual Laboratory for e-Science project
- 40 M€ Dutch project (20 M€ from government)
- 2 experimental environments:
  - Proof of Concept: applications research
  - Rapid Prototyping (using DAS): computer science
- Research on:
  - Applications (biodiversity, bioinformatics, food informatics, telescience, physics)
  - Computer science tools for visualization, workflow, ontologies, data management, PSEs, grid computing

Outline
- History
- Impact on Dutch computer science research
  - Trend: cluster computing → distributed computing → Grids → virtual laboratories
- Example research projects: Ibis, Satin, Zorilla
- Grid experiments on DAS-2, GridLab, Grid’5000
- Future: DAS-3

The Ibis system
- Programming support for distributed supercomputing on heterogeneous grids
  - Fast RMI, group communication, object replication, divide-and-conquer
- Java-centric approach + JVM technology
  - Inherently more portable than native compilation
  - Requires the entire system to be written in pure Java
  - Optimized special-case solutions with native code
- Several external users:
  - ProActive, VU medical center, AMOLF, TU Darmstadt
(A plain-JDK RMI sketch of the programming model follows below.)
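Ibis keeps the familiar Java RMI programming model while making the communication underneath fast and grid-capable. The sketch below is plain JDK RMI, not Ibis code, purely to show the style of remote invocation being optimized; the Compute/ComputeServer names are invented for illustration.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Hypothetical remote interface: the kind of method an RMI-style grid
    // program invokes on another cluster. Plain JDK RMI is used here only
    // as an illustration of the programming model.
    interface Compute extends Remote {
        double evaluate(double x) throws RemoteException;
    }

    public class ComputeServer implements Compute {
        public double evaluate(double x) throws RemoteException {
            return x * x;   // stand-in for real application work
        }

        public static void main(String[] args) throws Exception {
            Compute stub = (Compute) UnicastRemoteObject.exportObject(new ComputeServer(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("compute", stub);   // clients look up "compute" and call evaluate()
            System.out.println("ComputeServer ready");
        }
    }

A client obtains the stub via LocateRegistry.getRegistry(host).lookup("compute") and calls evaluate() as if it were local; the Ibis contribution is making such calls efficient on wide-area, heterogeneous systems.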

Compiling/optimizing programs
[Diagram: source → Java compiler → bytecode → bytecode rewriter → bytecode → JVM; the compiler and JVM are standard Java, the bytecode rewriter is Ibis-specific]
- Optimizations are done by bytecode rewriting
  - E.g. compiler-generated serialization (illustrated below)
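To see what "compiler-generated serialization" buys: default Java serialization discovers fields reflectively at run time, whereas generated code reads and writes fields directly. The Externalizable class below is a plain-JDK stand-in for the kind of code a bytecode rewriter could generate; it is not actual Ibis output, and the Particle class is made up for illustration.

    import java.io.Externalizable;
    import java.io.IOException;
    import java.io.ObjectInput;
    import java.io.ObjectOutput;

    // Illustrative only: explicit serialization code of the kind a bytecode
    // rewriter can generate, avoiding the reflection used by default
    // java.io.Serializable handling.
    public class Particle implements Externalizable {
        private int id;
        private double x, y, z;

        public Particle() { }                       // no-arg constructor required by Externalizable

        public Particle(int id, double x, double y, double z) {
            this.id = id; this.x = x; this.y = y; this.z = z;
        }

        @Override
        public void writeExternal(ObjectOutput out) throws IOException {
            out.writeInt(id);                       // fields written directly, no reflection
            out.writeDouble(x);
            out.writeDouble(y);
            out.writeDouble(z);
        }

        @Override
        public void readExternal(ObjectInput in) throws IOException {
            id = in.readInt();                      // read back in the same order as written
            x = in.readDouble();
            y = in.readDouble();
            z = in.readDouble();
        }
    }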

Ibis Overview
[Diagram: the application on top of programming models (RMI, Satin, RepMI, GMI, MPJ, ProActive), which sit on the Ibis Portability Layer (IPL); below the IPL are communication implementations (TCP, UDP, P2P, GM, Panda); the legend distinguishes Java components from native ones]

Satin: a parallel divide-and-conquer system on top of Ibis
- Divide-and-conquer is inherently hierarchical
- More general than master/worker
- Satin: Cilk-like primitives (spawn/sync) in Java (an analogous JDK fork/join sketch follows below)
- Supports replicated shared objects with user-defined coherence semantics
- Supports malleability (nodes joining/leaving) and fault tolerance (nodes crashing)
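In the spawn/sync style, a recursive call is "spawned" so that an idle node may steal it, and "sync" waits for the spawned results. The sketch below shows that style with the JDK's single-machine fork/join framework; it is an analogue for readability, not Satin code (Satin marks spawnable methods in the application itself and distributes the work across grid sites).

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Divide-and-conquer in spawn/sync style, illustrated with the JDK's
    // fork/join framework (a local analogue of Satin's programming model).
    public class Fib extends RecursiveTask<Long> {
        private final int n;

        public Fib(int n) { this.n = n; }

        @Override
        protected Long compute() {
            if (n < 2) {
                return (long) n;                 // base case: no further division
            }
            Fib left = new Fib(n - 1);
            Fib right = new Fib(n - 2);
            left.fork();                         // "spawn": may be stolen by an idle worker
            long r = right.compute();            // work on one half locally
            long l = left.join();                // "sync": wait for the spawned half
            return l + r;
        }

        public static void main(String[] args) {
            long result = new ForkJoinPool().invoke(new Fib(30));
            System.out.println("fib(30) = " + result);
        }
    }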

Zorilla: P2P supercomputing
- Fully distributed Java-based system for running parallel applications
- Uses P2P techniques
- Supports malleable (Satin) applications
- Uses a locality-aware flood-scheduling algorithm (a simplified sketch follows below)
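As a rough intuition for locality-aware flood scheduling: a job advertisement spreads outward from the submitting node, visiting nearby (low-latency) peers first and widening the radius only while more processors are still needed. The Java sketch below is a simplified, hypothetical model, not the Zorilla implementation; all names are invented for illustration.

    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Queue;
    import java.util.Set;

    // Hypothetical model of locality-aware flood scheduling: breadth-first
    // flooding over the P2P overlay, preferring low-latency neighbours and
    // stopping as soon as enough processors have been claimed.
    public class FloodScheduler {

        public interface Node {
            String id();
            int freeProcessors();
            List<Node> neighboursByLatency();   // nearest (lowest-latency) first
        }

        // Returns the number of processors claimed for the job.
        public static int flood(Node origin, int processorsNeeded, int maxHops) {
            Set<String> visited = new HashSet<>();
            Queue<Node> frontier = new ArrayDeque<>();
            Queue<Integer> hops = new ArrayDeque<>();
            frontier.add(origin);
            hops.add(0);
            visited.add(origin.id());

            int claimed = 0;
            while (!frontier.isEmpty() && claimed < processorsNeeded) {
                Node node = frontier.remove();
                int hop = hops.remove();
                claimed += Math.min(node.freeProcessors(), processorsNeeded - claimed);
                if (hop == maxHops) continue;   // limit the flood radius
                for (Node neighbour : node.neighboursByLatency()) {
                    if (visited.add(neighbour.id())) {
                        frontier.add(neighbour);
                        hops.add(hop + 1);
                    }
                }
            }
            return claimed;
        }
    }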

Running applications without Zorilla
- Deployment
  - Copy program and input files to all sites
  - Determine local job scheduling mechanism, write job submission scripts
  - Determine network setup of all clusters
- Running
  - Determine site and node availability
  - Submit application to the scheduler on each site
  - Monitor progress of application
- Clean up
  - Gather output and log files
  - Cancel remaining reservations
  - Remove program, input, output and log files from sites

Running applications with Zorilla
- Deployment (once)
  - Copy Zorilla to all sites
  - Determine local job scheduling mechanism, write job submission scripts
  - Determine network setup of all clusters
  - Submit Zorilla to local job schedulers
- Running and clean up
  - Submit job to the Zorilla system:
    $ zubmit -j nqueens.jar -#w676 NQueens

Grid experiments
- DAS is an "ideal", laboratory-like environment for doing clean performance experiments
- The GridLab testbed (incl. VU-DAS) is used for heterogeneous experiments
- Grid’5000 is used for large-scale experiments

Performance of Ibis on the wide-area DAS-2 (64 nodes)
[Chart omitted: Cellular Automaton uses Ibis/IPL, the other applications use Satin]

GridLab testbed

Testbed sites

Experiences
- Grid testbeds are difficult to obtain
- Poor support for co-allocation
- Firewall problems everywhere
- Java indeed runs anywhere
- Divide-and-conquer parallelism can obtain high efficiencies (66-81%) on a grid
  - See [van Reeuwijk, Euro-Par 2005]

GridLab results

Program                 Sites  CPUs  Efficiency
Raytracer (Satin)       5      40    81 %
SAT-solver (Satin)      5      28    88 %
Compression (Satin)     3      22    67 %
Cellular autom. (IPL)   3      22    66 %

Efficiency is normalized to a single CPU type (1 GHz P3); see the note below.
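For readers wondering how efficiency is defined across heterogeneous CPUs: a common normalization (assumed here; the slide does not spell it out) converts every processor to its speed relative to the 1 GHz P3 reference and compares against ideal speedup on that reference:

    E = \frac{T_{\mathrm{ref}}}{T_{\mathrm{grid}} \cdot \sum_i n_i s_i}

where T_ref is the run time on one reference CPU, T_grid the run time on the grid, n_i the number of CPUs of type i, and s_i the speed of type i relative to the reference CPU. Under this definition, a perfectly scaling program on CPUs as fast as the reference would reach E = 1.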

Grid’5000 experiments
- Used Grid’5000 for:
  - N-queens challenge (2nd Grid Plugtest)
  - Testing Satin’s shared objects
  - Large-scale Zorilla experiments
- Issues:
  - No DNS resolution for compute nodes
  - Use of local IP addresses ( x.y) for routing
  - Setting up connections to nodes with multiple IP addresses (see the sketch below)
  - Unable to run on Grid’5000 and DAS-2 simultaneously
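The multiple-IP-address issue is the kind of problem that has to be handled explicitly in connection-setup code. The small, generic Java sketch below (not Ibis or Zorilla code; the timeout value is arbitrary) tries each advertised address of a node in turn until one connection succeeds:

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.List;

    // Generic illustration: a grid node may advertise several addresses
    // (site-local, cluster-private, public); try them in order until one works.
    public class MultiAddressConnector {

        public static Socket connect(List<InetAddress> candidates, int port, int timeoutMillis)
                throws IOException {
            IOException lastFailure = null;
            for (InetAddress address : candidates) {
                Socket socket = new Socket();
                try {
                    socket.connect(new InetSocketAddress(address, port), timeoutMillis);
                    return socket;                      // first reachable address wins
                } catch (IOException e) {
                    lastFailure = e;                    // remember why this attempt failed
                    socket.close();
                }
            }
            throw new IOException("no address of the node was reachable", lastFailure);
        }
    }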

Grid’5000 results

Program             Sites  CPUs  Efficiency
SAT solver                        %
Traveling Salesman                %
VLSI routing                      %
N-queens            5      960   ~85 %

Satin programs (using shared objects), running concurrently on clusters at Sophia-Antipolis, Bordeaux, Rennes, and Orsay. (The numbers for the first three rows did not survive in the transcript.)

Zorilla on Grid’5000
- N-Queens divide-and-conquer Java application
- Six Grid’5000 clusters
- 338 machines, 676 processors
- 22-queens solved in 35 minutes

Processor allocation results

Comparison
- DAS
  + Ideal for speedup measurements (homogeneous)
  - Bad for heterogeneous or long-latency experiments
- GridLab testbed
  + Useful for heterogeneous experiments (e.g. Ibis)
  - Small-scale, unstable
- Grid’5000
  + Excellent for large-scale experiments
  - Bad connectivity to the outside world (DAS)

DAS-3 (2006)
- Partners: ASCI, Gigaport-NG/SURFnet, VL-e, MultimediaN
- Expected to be more heterogeneous
- Experiment with (nightly) production use
- DWDM backplane
  - Dedicated optical group of lambdas
  - Can allocate multiple 10 Gbit/s lambdas between sites

DAS-3
[Diagram: CPUs at each site connected through routers (R) to the NOC]

StarPlane project
- Key idea:
  - Applications can dynamically allocate light paths
  - Applications can change the topology of the wide-area network, possibly even at sub-second timescale
- Challenge: how to integrate such a network infrastructure with (e-Science) applications? (A hypothetical interface sketch follows below.)
- Collaboration with Cees de Laat, Univ. of Amsterdam
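One way to picture "applications allocate light paths" is as a small programmatic interface between the application and the photonic network. The Java interface below is entirely hypothetical (StarPlane's actual interface is not described on the slide); every name and parameter is invented, only to make the integration challenge concrete.

    import java.time.Duration;

    // Hypothetical application-facing interface to a StarPlane-like photonic
    // network: names, parameters, and semantics are invented for illustration.
    public interface LightPathService {

        // Request a dedicated lambda between two sites at a given bandwidth,
        // e.g. requestPath("VU", "Leiden", 10_000, Duration.ofMinutes(30)).
        LightPath requestPath(String fromSite, String toSite,
                              int bandwidthMbit, Duration leaseTime)
                throws PathUnavailableException;

        // Return the lambda so other applications can reconfigure the topology.
        void releasePath(LightPath path);

        interface LightPath {
            String fromSite();
            String toSite();
            int bandwidthMbit();
        }

        class PathUnavailableException extends Exception {
            public PathUnavailableException(String message) { super(message); }
        }
    }

The open question the slide raises is exactly how such requests should be exposed to e-Science applications and coordinated among them.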

Conclusions
- DAS is a shared infrastructure for experimental computer science research
- It allows controlled (laboratory-like) grid experiments
- It accelerated the research trend: cluster computing → distributed computing → Grids → virtual laboratories
- We want to use DAS as part of larger international grid experiments (e.g. with Grid’5000)

Education using DAS
- International (top)master program PDCS: Parallel and Distributed Computer Systems
- Organized by Andy Tanenbaum, Henri Bal, Maarten van Steen et al.

Acknowledgements
- Grid’5000
- Andy Tanenbaum, Bob Hertzberger, Henk Sips, Lex Wolters, Dick Epema, Cees de Laat, Aad van der Steen
- Rob van Nieuwpoort, Jason Maassen, Kees Verstoep, Gosia Wrzesinska, Niels Drost, Thilo Kielmann, Ceriel Jacobs
- Many others

More info: