Activities in the ANU Supercomputer Facility ANUSF

History
ANUSF was established in 1987 to support large-scale computational projects
Established the first visualization laboratory in Australia – the VizLab
Acquired the first large-scale mass data storage system at an Australian university

APAC
Since 2001, ANUSF has operated and supported the National Facility of the Australian Partnership for Advanced Computing (APAC)
HP AlphaServer – the 31st fastest system in the world when installed, 516 processors
Plus a 152-processor Linux cluster

APAC is currently out to tender for a $12.5M replacement system

Mass Data Storage System (MDSS)
STK Powderhorn tape silo
1.2 petabytes potential capacity, 6,000 tapes
70 Mbytes/s tape drives, 200 Gbytes per tape
5 Tbytes fast disk cache
SAM-FS hierarchical storage management
Connected to the APAC systems and GrangeNet
Small 'off-site' silo being installed in the Chancelry
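As a rough sanity check on the figures above, the short Python sketch below recomputes them from the slide's own numbers (tape count, capacity per tape, drive speed, disk cache size). It is plain arithmetic only; it does not use any MDSS or SAM-FS interface.

```python
# Back-of-the-envelope check of the MDSS figures quoted on this slide.
# All constants restate the slide's numbers; nothing here talks to the
# real MDSS or to SAM-FS.

TAPES = 6_000            # tapes in the STK Powderhorn silo
GB_PER_TAPE = 200        # capacity of each tape
DRIVE_MB_PER_S = 70      # tape drive streaming rate
DISK_CACHE_TB = 5        # fast disk cache in front of the tape store

total_tb = TAPES * GB_PER_TAPE / 1_000      # total tape capacity in TB
total_pb = total_tb / 1_000                 # ~1.2 PB, matching the slide
cache_share = DISK_CACHE_TB / total_tb      # fraction of the store that fits in cache
hours_per_tape = GB_PER_TAPE * 1_000 / DRIVE_MB_PER_S / 3_600  # time to stream one tape

print(f"Tape capacity: {total_pb:.1f} PB across {TAPES} tapes")
print(f"Disk cache covers {cache_share:.2%} of total tape capacity")
print(f"Streaming one 200 GB tape at 70 MB/s takes about {hours_per_tape:.2f} hours")
```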

Mass Data Storage System
Storage for computational results
Storage for experimental results, e.g. the MACHO project
GrangeNet and APAC Grid projects, e.g. PARADISEC, Belle, ACIGA
APAC Data Projects of "National Significance" – humanities, social sciences

Activities
Support of systems
Academic Consultants – high-level support to users: algorithm and programming advice, training, data storage
VizLab – helping researchers with difficult data visualizations, VR, presentations, etc.
Fujitsu Chemistry project – 15 years

ANUSF's APAC Activities
National Facility
APAC Grid program
–Internet Futures group, GrangeNet program
–Leading the national data grid program, involvement in 6 other projects
–7 staff in ANUSF led by Markus Buchhorn, other ANU staff contributing
APAC CT&T Program
–Tools & techniques – a national program led by Ben Evans
Other ANU involvement
–APAC EOT Program – Steve Roberts, SMS

Further information
Levels 3 & 4, Huxley Building