VL-e PoC: What it is and what it isn't. Jan Just Keijser, VL-e P4 Scaling and Validation Team. TU Delft Grid Meeting, December 11th, 2008.

Presentation transcript:

VL-e PoC: What it is and what it isn't
Jan Just Keijser, VL-e P4 Scaling and Validation Team
TU Delft Grid Meeting, December 11th, 2008

The VL-e vision (aka "Bob's stoomboot", Dutch for "Bob's steamboat")

What is the VL-e PoC Environment?
The Proof-of-Concept Environment (PoC) is the shared, common e-Science environment of the Virtual Laboratory for e-Science. In the PoC, the different tools and services used by and provided by the project are available, bound together in a service-oriented approach. The PoC covers three distinct areas:
● A software Distribution, to be installed by anyone in VL-e interested in participating in the PoC
● A PoC Environment, the ensemble of systems that run the current PoC Distribution
● The PoC Central Facilities, those systems running the PoC Distribution that are centrally managed by the P4 Scaling and Validation Programme on behalf of the project

The VL-e PoC R3 Distribution
VL-e PoC R3 contents:
● Scientific Linux 4, 32-bit
● gLite 3.1
● Sun Java JDK 1.5.0_16
Plus: fsl 4.0, gat 1.8.2, globus-toolkit, graphviz 2.18, ibis 1.4, itk 3.4, javagat 1.7.1, kepler 1.0.0rc1, lam, lucene 2.3.1, MatlabMPI 1.2, Mesa3D, modules 3.2.3, mpitb, mricro, octave, paraview 3.2.1, swi-prolog, R 2.6.2, Rmpi 0.5, sesame-client, SRB client 3.4.2, taverna 1.7.1, vlet/vbrowser, vtk 5.0.4, weka
Contributed packages: mono (C#), and others (see the PoC distribution pages for the full list)
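As a quick illustration of what the R3 distribution looks like from a worker node, the short Python sketch below checks whether a handful of the command-line tools shipped with R3 are actually on the PATH. It is purely illustrative and not part of the PoC release; the tool names are assumptions based on the package list above.

```python
#!/usr/bin/env python3
# Illustrative check only: verify that a few tools expected from the
# VL-e PoC R3 distribution can be found on this node's PATH.
# Tool names are assumptions based on the package list above.
import shutil

EXPECTED_TOOLS = ["java", "R", "octave", "dot"]  # "dot" is part of graphviz

def missing_tools(tools):
    """Return the tools that cannot be found on the PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]

if __name__ == "__main__":
    missing = missing_tools(EXPECTED_TOOLS)
    if missing:
        print("Not found on this node: " + ", ".join(missing))
    else:
        print("All expected tools were found.")
```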

The VL-e PoC Environment
[Diagram: the VL-e environments, their usage and characteristics]
● VL-e Rapid Prototyping Environment (DAS-2/3, local resources): Virtual Lab rapid prototyping and interactive simulation; flexible, 'unstable'
● VL-e Certification Environment (NL-Grid fabric research cluster): testing and certification of grid middleware and VL-software; compatibility; flexible test environment
● VL-e Proof of Concept Environment (NL-Grid production cluster, central mass-storage facilities + SURFnet): application development; stable, reliable, tested; certified releases of grid middleware and VL-software
The PoC environment is a shared, common environment, where different tools and services are both used and provided by the VL-e community.

The VL-e PoC Central Facility topology

Differences between RPE and PoC
● Rapid Prototyping Environment (DAS-3)
   - System load is usually low
   - Users have a single userid + home directory
   - Some people say it's not a grid, but a cluster of clusters
● Proof-of-Concept Environment
   - Typically high load (90+ %)
   - Users get access using X.509 certificates and are assigned pool accounts
   - Pool accounts are unique to each cluster and do not share home directories, not even within a cluster
   - Group rights are handled using Virtual Organisations (VOs) and VOMS
   - No direct user logins allowed
   - No inbound connectivity (listeners) on worker nodes
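Because PoC access is certificate-based rather than login-based, a working session normally starts by creating a VOMS proxy on a gLite user interface machine. The Python sketch below simply wraps the standard voms-proxy-init and voms-proxy-info commands; the VO name is only an example, and the exact set of VOs you can use depends on your registration.

```python
#!/usr/bin/env python3
# Minimal sketch: create a VOMS proxy and report its remaining lifetime.
# Assumes a gLite user interface with the voms-proxy-* tools installed;
# the VO name below is an example only.
import subprocess
import sys

VO_NAME = "vlemed"  # example VO name; use the VO you are registered in

def create_proxy(vo):
    """Run voms-proxy-init for the given VO (prompts for the key passphrase)."""
    return subprocess.call(["voms-proxy-init", "--voms", vo])

def proxy_seconds_left():
    """Ask voms-proxy-info how many seconds the proxy is still valid."""
    output = subprocess.check_output(["voms-proxy-info", "--timeleft"])
    return int(output.strip())

if __name__ == "__main__":
    if create_proxy(VO_NAME) != 0:
        sys.exit("voms-proxy-init failed")
    print("Proxy valid for another %d seconds" % proxy_seconds_left())
```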

Typical PoC cluster characteristics
● Scientific Linux / CentOS 4, 32-bit
● Migration to CentOS 5, 64-bit in 2009
● Runs the gLite grid middleware
● Usually Torque/PBS as the batch system
● Usually not allowed to run software on the cluster headnode
● Runs PoC software and lots of non-PoC software
● Job allocation is based on 1 job per core
● Typically 2 GB of RAM per job
● Multicore/MPI jobs not well supported (yet)
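To make the single-core, roughly 2 GB picture concrete, the sketch below writes a minimal JDL description and submits it through the gLite WMS command-line tools. File names, the payload script and the submission options are illustrative; check them against your local UI setup before relying on them.

```python
#!/usr/bin/env python3
# Illustrative sketch: submit a simple single-core job to a gLite-based
# PoC cluster. Assumes a gLite UI with a valid VOMS proxy; the payload
# script "myjob.sh" and all file names are examples only.
import subprocess

# A job like this gets one core and (typically) about 2 GB of RAM,
# so the payload should stay within those limits.
JDL = '''\
Executable    = "/bin/sh";
Arguments     = "myjob.sh";
StdOutput     = "stdout.txt";
StdError      = "stderr.txt";
InputSandbox  = {"myjob.sh"};
OutputSandbox = {"stdout.txt", "stderr.txt"};
'''

def submit_job(jdl_text, jdl_file="single_core.jdl"):
    """Write the JDL to disk and submit it with automatic proxy delegation."""
    with open(jdl_file, "w") as f:
        f.write(jdl_text)
    # -a: delegate the user's proxy automatically; the command prints the job ID
    subprocess.check_call(["glite-wms-job-submit", "-a", jdl_file])

if __name__ == "__main__":
    submit_job(JDL)
```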

How to get software to run on the PoC
1. VO software group area
   - The VO has full control
   - Must be installed per cluster
   - Not supported at all by the P4 team or site administrators
   - (a sketch of how a job locates this area follows after option 3 below)
2. VL-e 'contrib track'
   - Contributed packages are not part of the regular PoC release cycle
   - Certified only to pass some basic installation tests
   - Not supported by the P4 team, but exclusively by the original contributor
   - Most sites will install contributed packages, but no guarantees
   - See http://poc.vl-e.nl/distribution/contrib/ for details

How to get software to run on the PoC (continued)
3. Part of the next PoC release
   - Nirwana/Walhalla/Heaven?
   - Must pass the 'contrib track' first
   - Must be stable, reliable and tested software
   - PoC releases are scheduled every ~9 months
   - NO major updates possible in between (except critical security fixes)
There are some good points:
   - Eternal glory
   - Will be installed at all VL-e PoC clusters
   - Supported by the VL-e P4 Scaling and Validation team
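For option 1 above (the VO software group area), gLite worker nodes conventionally expose the location of that area through an environment variable of the form VO_<NAME>_SW_DIR. The sketch below shows how a job payload might pick up a tool installed there; the VO name and tool path are examples, and the exact convention should be checked against the local site configuration.

```python
#!/usr/bin/env python3
# Illustrative sketch: locate the VO software group area from inside a job
# via the VO_<NAME>_SW_DIR environment variable set on gLite worker nodes.
# The VO name and the tool path below are examples only.
import os
import subprocess
import sys

VO_NAME = "vlemed"  # example VO name

def vo_software_dir(vo):
    """Return the VO software area, e.g. $VO_VLEMED_SW_DIR for VO 'vlemed'."""
    var = "VO_%s_SW_DIR" % vo.upper().replace("-", "_")
    path = os.environ.get(var)
    if not path:
        sys.exit("Environment variable %s is not set on this node" % var)
    return path

if __name__ == "__main__":
    sw_dir = vo_software_dir(VO_NAME)
    tool = os.path.join(sw_dir, "bin", "mytool")  # hypothetical tool name
    subprocess.check_call([tool, "--version"])
```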

VL-e User Applications seen on the PoC Infrastructure
Data-Intensive Science:
   - DANS (KNAW/KB)
   - Sciamachy (KNMI)
   - eNMR (UU)
Food Informatics:
   - Bitterbase web service (Unilever)
Medical Diagnosis & Imaging:
   - fMRI, VBrowser, Moteur (AMC)
   - JavaGAT (VUmc)
Bio Informatics

Questions?