SCARIe: using StarPlane and DAS-3 Paola Grosso Damien Marchel Cees de Laat SNE group - UvA

Outline The goal of my presentation: to give an overview of the DAS-3 and StarPlane architectures, their current status, and to outline how SCARIe research can use these infrastructures. Consequently the outline of this talk:
- architecture of DAS-3
- architecture of StarPlane
- current status of DAS-3
- current status of StarPlane
- SCARIe use of DAS-3 and StarPlane

Some basics… DAS-3 is the ASCI distributed supercomputer, or better said the Distributed ASCI Supercomputer. DAS-3 is the third generation of DAS. StarPlane is an NWO-funded project. StarPlane's goal is to develop the middleware for application-controlled photonic networks. StarPlane performs its research on a portion of SURFnet6.

DAS-3: goals The goal of DAS is to provide a common computational infrastructure for researchers within ASCI, who work on various aspects of parallel and distributed systems, including communication substrates, programming environments, and applications. [Quoting Henri Bal - VU]
- Parallel processing on multiple clusters
- Study non-trivially parallel applications
- Exploit hierarchical structure for locality optimizations (latency hiding, message combining, etc.)

DAS-3: cluster architecture Five clusters in four locations: UvA, VU, Leiden and Delft. Two clusters at the UvA. Key components:
- 1 head node
- [] cluster nodes
- Local/university interconnect
- Fast interconnect
- Photonic interconnect

DAS-3: clusters setup Per-site figures (LU / TUD / UvA / UvA-MN / VU):
- Head node storage: 10TB / 5TB / 2TB / 2TB / 10TB (29TB total)
- Head node CPU: 2x2.4GHz DC or 2x2.2GHz DC per site (46GHz aggregate); head memory 16GB or 8GB per site (64GB total)
- Head node interconnect: 10GE at all sites, Myri-10G at four of the five
- Compute node storage: 400GB, 250GB or 2x250GB per node depending on site (89TB total)
- Compute node CPU: 2x2.6GHz / 2x2.4GHz / 2x2.2GHz DC / 2x2.4GHz / 2x2.4GHz DC (1.9THz aggregate)
- Compute node memory: 4GB per node (1084GB total)
- Nortel 1GE ports: 32 (16) / 136 (8) / 40 (8) / 46 (2) / 85 (11) (542 total)
- Nortel 10GE ports: 1 (1) / 9 (3) / 2 / 2 / 1 (1) (15 total)
- Myrinet 10G and 10GE ports at the Myrinet-equipped sites

DAS-3: status As of today, all sites are:
- Operational, with cluster nodes, head nodes and bridge nodes installed. [Bridge nodes are used to connect to the photonic network while the performance of the Myrinet cards is under test.]
- Providing user accounts.
- Running jobs on individual clusters, but not yet between clusters.
Grid software:
- SGE is the preferred scheduler (see the submission sketch after this slide).
- GLOBUS and KOALA are installed but not operational.
- MPICH-G2 is not installed.
Myrinet:
- Switch cards installed but not operational.
- Myrinet-Ethernet bridging under test.
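A minimal sketch of submitting a job to SGE programmatically, via the DRMAA interface. It assumes the `drmaa` Python bindings and SGE's libdrmaa are available on the head node, which the slides do not confirm; the correlator path and its argument are hypothetical.

```python
# Sketch: submit one job to SGE through DRMAA and wait for it to finish.
import drmaa

def submit(command, args):
    s = drmaa.Session()
    s.initialize()
    try:
        jt = s.createJobTemplate()
        jt.remoteCommand = command          # path to the executable to run
        jt.args = args                      # its command-line arguments
        jt.joinFiles = True                 # merge stdout and stderr
        job_id = s.runJob(jt)
        info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
        s.deleteJobTemplate(jt)
        return info.exitStatus
    finally:
        s.exit()

if __name__ == "__main__":
    # Hypothetical correlator invocation on one cluster:
    status = submit("/home/scarie/bin/sfxc", ["experiment.ctrl"])
    print("job finished with exit status", status)
```

The same template could later be pointed at KOALA for cross-cluster scheduling once it becomes operational.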

DAS-3: photonic network
- The LAN connection to the university network, via Ethernet switches.
- The WAN/StarPlane connection to CPL, via an Ethernet "bridging" card in a Myrinet switch.
- Connection from each site to the OADM (fixed) equipment - Optical Add Drop Multiplexers - and to the WSSes - Wavelength Selectable Switches - in Amsterdam.

StarPlane: goals StarPlane is an NWO-funded project with major contributions from SURFnet and NORTEL. StarPlane will use the physical infrastructure provided by SURFnet6 and the distributed supercomputer DAS-3. The vision is to allow part of the photonic network infrastructure of SURFnet6 to be manipulated by Grid applications, to optimize the performance of specific e-Science applications. The novelty: to give flexibility directly to the applications by allowing them to choose the logical topology in real time, ultimately with subsecond lambda switching times.

StarPlane: WSS The WSS allows us to redirect a selected input color to the output fiber. This lets us flexibly reconfigure the network according to application demands (see the toy model below). The goal of StarPlane is sub-second switching and topology reconfiguration.
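To make the switching idea concrete, here is a toy model of a WSS as a lookup table from (input fiber, wavelength) to output fiber. It is purely illustrative: the fiber names and wavelengths are invented, and this is not an interface to the real StarPlane or Nortel hardware.

```python
# Toy model of a wavelength selective switch (WSS).
class WSS:
    """Maps (input fiber, wavelength) pairs to output fibers."""

    def __init__(self):
        self.cross_connects = {}  # (in_fiber, wavelength_nm) -> out_fiber

    def configure(self, in_fiber, wavelength_nm, out_fiber):
        # Redirect one color from an input fiber to a chosen output fiber.
        self.cross_connects[(in_fiber, wavelength_nm)] = out_fiber

    def route(self, in_fiber, wavelength_nm):
        return self.cross_connects.get((in_fiber, wavelength_nm))

# Reconfiguring the logical topology means changing a few entries:
wss = WSS()
wss.configure("vu", 1550.12, "uva")      # lambda from VU lands at UvA
wss.configure("vu", 1550.92, "leiden")   # a second color goes to Leiden
wss.configure("vu", 1550.12, "delft")    # "switch": the VU lambda now reaches Delft
assert wss.route("vu", 1550.12) == "delft"
```

In the real system the reconfiguration step is what StarPlane aims to drive from the application, within sub-second times.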

StarPlane: topology changes [Topology examples]

StarPlane and NDL While on the topic of topologies: the SNE group is working on NDL - the Network Description Language. NDL is an RDF data model, based on Semantic Web ideas, for network topology descriptions. In StarPlane we are researching the use of NDL for topology exchange and topology requests from clients. A small example follows below.
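As a flavor of what an NDL description looks like, a minimal sketch built with Python's rdflib (version 6 or later assumed). The ndl: terms follow the published NDL schema; the device URIs and host name below are invented for illustration, not taken from the actual DAS-3 configuration.

```python
# Sketch: describe one device and its interface in NDL, then print Turtle.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

NDL = Namespace("http://www.science.uva.nl/research/sne/ndl#")
EX = Namespace("http://example.org/das3#")  # hypothetical base for our nodes

g = Graph()
g.bind("ndl", NDL)

head = EX["uva-head"]
iface = EX["uva-head-10ge0"]

g.add((head, RDF.type, NDL.Device))
g.add((head, NDL.name, Literal("head.uva.das3.example.net")))  # hypothetical
g.add((head, NDL.hasInterface, iface))
g.add((iface, RDF.type, NDL.Interface))
# The far end: a port on the WSS in Amsterdam (hypothetical URI).
g.add((iface, NDL.connectedTo, EX["wss-asd-port3"]))

print(g.serialize(format="turtle"))
```

Because the description is plain RDF, a client can also query it (e.g. with SPARQL) to discover paths, which is what makes NDL usable for topology requests as well as topology exchange.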

StarPlane: status (1) First light between sites over the CPL infrastructure. Monitoring of the connections is being worked on, but there is no dynamic control over lightpaths yet.

StarPlane: status (2) Interaction between the management plane and DRAC over a testbed.

Conclusions References:
- DAS-3 website:
- StarPlane website:
- NDL website:
Questions? Now for the discussion…

For discussion Use of DAS-3 and StarPlane for SCARIe software correlation.
1. Does the DAS-3 supercomputer provide a suitable environment for software correlation? Are the computing power and the number of nodes adequate? Is the grid software suited to the requirements?
2. Data needs to be on the DAS-3 cluster. How do we move the data in real time to the cluster, or do we focus on just offline correlation? Do we have a lightpath connection to JIVE? From where to where? (A back-of-envelope rate estimate follows below.)
3. Topology requests. What are the optimal photonic topologies for software correlation?
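For question 2, a back-of-envelope estimate of the aggregate input rate into the correlator cluster. The station count and per-station rate are assumptions chosen for illustration, not figures from the slides:

```latex
% Aggregate rate into the correlator for N_st stations at R_st each
% (both values assumed for illustration):
R_{\mathrm{in}} = N_{\mathrm{st}} \cdot R_{\mathrm{st}}
               = 16 \times 1\ \mathrm{Gbit/s}
               = 16\ \mathrm{Gbit/s}
```

At such rates a single 10GE uplink would not suffice, which is why the per-site lightpaths and the choice of photonic topology matter for real-time correlation.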