Research Infrastructure
Simon Hood, Research Infrastructure Coordinator

Campus CIR Ecosystem (architecture slide)
– CSF: job queue, backend compute nodes
– iCSF: backend LVS nodes (VMs, GPUs)
– RQ (Redqueen): job queue, backend compute nodes
– Zreck: backend ET nodes (FPGA, GPU, Phi)
– RDS (Isilon): home dirs and shared areas, linked at 20 Gb/s
– RVMS: research VMs
– Access: SSH, X2GO/NX (Research Virtual Desktop Service), SSHFS command-line "mounts"; see the SSHFS sketch below
– Firewall and router ACLs: a /27 range restricted to campus, a /16 range reachable off campus
– Campus sites shown: Michael Smith (FLS), MIB?, MHS?, Materials (EPS)?
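The access path in the diagram (SSH, SSHFS and NX/X2GO behind router ACLs) means off-campus users reach RDS through a gateway hop. A minimal sketch of the SSHFS "mount" route, assuming a hypothetical gateway hostname, username and RDS path, none of which are given in the slides:

```python
"""Illustrative only: the hostname, username and paths below are
hypothetical placeholders, not the real Manchester service names."""
import subprocess
from pathlib import Path

GATEWAY = "gateway.example.ac.uk"      # assumed SSH/SSHFS gateway host
REMOTE_HOME = "/mnt/rds/home/jbloggs"  # assumed path to an RDS (Isilon) home dir
MOUNT_POINT = Path.home() / "rds"      # local mount point on the laptop

def mount_rds(user: str) -> None:
    """Mount the RDS home directory over SSHFS via the campus gateway."""
    MOUNT_POINT.mkdir(exist_ok=True)
    subprocess.run(
        ["sshfs", f"{user}@{GATEWAY}:{REMOTE_HOME}", str(MOUNT_POINT),
         "-o", "reconnect"],           # re-establish the mount if the link drops
        check=True,
    )

def unmount_rds() -> None:
    """Detach the SSHFS mount (Linux; use 'umount' on macOS)."""
    subprocess.run(["fusermount", "-u", str(MOUNT_POINT)], check=True)

if __name__ == "__main__":
    mount_rds("jbloggs")
    print("RDS home mounted at", MOUNT_POINT)
```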

Ecosystem Workflow
1. Input preparation. E.g. upload data to RDS and set job parameters in the application GUI, from the office on campus. (iCSF, RVDS, SSHFS, RDS)
2. Submit compute job. E.g. a long-running, parallel, high-memory heat/stress analysis, submitted from home. (iCSF, RVDS, CSF, RDS)
3. Check on the compute job and submit other jobs. E.g. while away at a conference in Barcelona. (SSH, CSF, RDS; see the submission sketch below)
4. Analyse results. E.g. in the application GUI, on a laptop in the hotel and back in the office. (iCSF, RVDS, RDS)
5. Publish results. E.g. a front-end web server running on RVMS, accessing an Isilon share. (RVMS, RDS)
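Steps 2 and 3 amount to submitting a batch job on the CSF and polling it from wherever you have SSH. A rough sketch, assuming an SGE-style scheduler (qsub/qstat) and a placeholder login-node name; the real CSF host, queue settings and job script are not named in the slides:

```python
"""Illustrative only: 'csf-login.example.ac.uk' and the job script name
are placeholders; the real CSF submission details are not in the slides."""
import subprocess
import time

LOGIN_NODE = "csf-login.example.ac.uk"   # assumed CSF login/submit host
JOB_SCRIPT = "heat_stress.sge"           # assumed job script already in the RDS home dir

def submit_job(user: str) -> str:
    """Submit the job over SSH and return the job ID reported by qsub."""
    result = subprocess.run(
        ["ssh", f"{user}@{LOGIN_NODE}", "qsub", "-terse", JOB_SCRIPT],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()         # '-terse' makes qsub print only the job ID

def job_is_running(user: str, job_id: str) -> bool:
    """True while the job still appears in the user's qstat listing."""
    result = subprocess.run(
        ["ssh", f"{user}@{LOGIN_NODE}", "qstat"],
        capture_output=True, text=True, check=True,
    )
    return job_id in result.stdout

if __name__ == "__main__":
    job = submit_job("jbloggs")
    print("Submitted job", job)
    while job_is_running("jbloggs", job):  # step 3: check from anywhere with SSH
        time.sleep(300)
    print("Job finished; results are on RDS ready for step 4.")
```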

CIR Stats
CSF
– £1.3m academic contribution since Dec; CPU cores
– £175k more lined up (Jan 2014?)
– Awaiting the outcome of some big bids… Kilburn???
Storage (Isilon)
– 500 PB per year
– Current: 120 TB for each faculty, going fast!
Network
– July: £1.5m on Cisco kit
– 80 Gb core, 10 Gb to buildings
People
– Pen, George, Simon

Recent and Current Work
Redqueen
– Summer: RGF-funded refresh of 50% of the cluster
– Integration with Isilon (RDN)
CSF (mostly batch compute)
– Summer: £300k procurement
– RDN: moving all home dirs to Isilon, keeping local Lustre-based scratch (see the stage-in/stage-out sketch below)
Gateways
– SSH and SSHFS/SFTP
– Research Virtual Desktop Service: NX, X2GO
New clusters
– Incline/iCSF: interactive compute
– Zreck: GPGPUs, Xeon Phi, FPGA, …
RDN (20 Gb)
– CSF, Redqueen, Incline/iCSF, Zreck
– Michael Smith (FLS)
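Keeping a local Lustre scratch area alongside Isilon-backed home directories implies the usual stage-in/stage-out pattern inside a job: copy inputs to scratch, compute there, copy results home. A minimal sketch with placeholder paths; the real mount points are not given in the slides:

```python
"""Illustrative only: the /scratch and home paths are placeholders for the
Lustre scratch and Isilon-backed home directories mentioned above."""
import os
import shutil
from pathlib import Path

HOME_INPUT = Path.home() / "projects" / "heat_stress" / "input"   # Isilon-backed home
SCRATCH = Path("/scratch") / os.environ.get("USER", "unknown")    # local Lustre scratch
RESULTS = Path.home() / "projects" / "heat_stress" / "results"

def stage_in() -> Path:
    """Copy input data from the (backed-up, network) home dir to fast local scratch."""
    workdir = SCRATCH / "heat_stress_run"
    shutil.copytree(HOME_INPUT, workdir, dirs_exist_ok=True)
    return workdir

def stage_out(workdir: Path) -> None:
    """Copy results back to home, then clean up (scratch is not backed up)."""
    RESULTS.mkdir(parents=True, exist_ok=True)
    for output in workdir.glob("*.out"):
        shutil.copy2(output, RESULTS)
    shutil.rmtree(workdir)

if __name__ == "__main__":
    wd = stage_in()
    # ... run the solver against the files in 'wd' here ...
    stage_out(wd)
```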

Thank you! Questions to me: