SCARF
Duncan Tooke, RAL HPCSG

Overview
- What is SCARF?
- Hardware & OS
- Management
- Software
- Users
- Future

What is SCARF?
- Production: service-level definitions, professionally managed, user support, application support, integrated with other e-Science services.
- Grid: access should be seamless; it uses the same technology as the National Grid Service (NGS) and is thus interoperable, and it is integrated into the CCLRC Single Sign-On project.

Hardware & OS
- 128 x dual AMD 248 (2 x 1 core), 8 GB RAM, Myrinet (A)
- 18 x dual AMD 275 (2 x 2 core), 8 GB RAM, Myrinet (B)
- 4 x quad AMD 275 (4 x 2 core), 32 GB RAM, Myrinet (B)
- 52 x dual AMD 280 (2 x 2 core), 8 GB RAM, Myrinet (B)
- 56 x dual AMD 275 (2 x 2 core), 4 GB RAM, no Myrinet
- 4 x file servers with 4 TB RAID
- 5 x file servers on a 70 TB SAN
- 10 Gb network link (shared)
- RHEL 4, Globus 4 (pre-web services)
- 1 system administrator
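For a sense of scale, the node inventory above can be tallied up; the aggregate below is a quick calculation from the per-node figures on this slide, not a number stated in the original:

```python
# Tally of the SCARF compute-node inventory: (node count, cores per node, GB RAM per node)
nodes = [
    (128, 2 * 1, 8),   # dual AMD 248, Myrinet (A)
    (18,  2 * 2, 8),   # dual AMD 275, Myrinet (B)
    (4,   4 * 2, 32),  # quad AMD 275, Myrinet (B)
    (52,  2 * 2, 8),   # dual AMD 280, Myrinet (B)
    (56,  2 * 2, 4),   # dual AMD 275, no Myrinet
]
total_nodes = sum(n for n, _, _ in nodes)
total_cores = sum(n * c for n, c, _ in nodes)
total_ram   = sum(n * r for n, _, r in nodes)
print(total_nodes, total_cores, total_ram)  # 258 nodes, 792 cores, 1936 GB
```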

Management
- Heterogeneous environment that needs to run as a single system
- SCALI-Manage with a single slave kickstart image + common config RPM
- SCALI-MPI for single MPI binaries
- SAN + LDAP + automount
[Diagram: compute nodes connected to the SAN, LDAP and management servers]
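The SAN + LDAP + automount combination above can be sketched as a pair of autofs maps. This is a minimal illustration, not SCARF's actual configuration: the server and export names are placeholders, and a site of this kind would typically serve the map from LDAP rather than a flat file.

```
# /etc/auto.master -- hypothetical sketch; mount home directories on demand
/home   /etc/auto.home  --timeout=300

# /etc/auto.home -- "sanfs.example.ac.uk:/export/home" is a placeholder server
*   -rw,hard,intr   sanfs.example.ac.uk:/export/home/&
```

With this in place, any node that references /home/<user> triggers an NFS mount from the SAN transparently, which is what lets a heterogeneous cluster behave as a single system for users.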

Software
- Compilers: GCC, PGI, Intel for C, C++, Fortran, Java
- MPI & parallel debugger
- User-generated code
- Popular scientific packages, e.g. GAMESS, Gaussian

Users
- 87 registered users from 11 facilities
- 1.5 million CPU hours delivered in the last 12 months
- Most are developers or experienced HPC users
- Just starting to attract Grid adopters

The future…
- Continued expansion, enabled by the infrastructure
- Development of Grid tools, e.g. automated parameter sweeping
- User interfaces
- NGS affiliation
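The automated parameter sweeping mentioned above could be sketched as follows. This is an illustrative outline only: the `simulate` program, its flags, and the batch-script shape are assumptions, and a real tool would hand each script to the cluster's scheduler rather than just render it.

```python
# Sketch of an automated parameter sweep: enumerate every combination of
# parameter values and render one batch-job script per combination.
import itertools

def sweep(params):
    """Yield one dict per combination of the given parameter values."""
    keys = list(params)
    for values in itertools.product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

def job_script(combo):
    """Render a minimal batch script for one combination (hypothetical format)."""
    args = " ".join(f"--{k}={v}" for k, v in combo.items())
    return f"#!/bin/sh\n./simulate {args}\n"

# Example: 2 temperatures x 3 pressures -> 6 jobs
combos = list(sweep({"temp": [300, 350], "pressure": [1, 2, 5]}))
print(len(combos))  # 6
```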