CFI 2004 UW: A quick overview with lots of time for Q&A and exploration.

Q: What is a cluster? A: A group of machines that can work together to produce results.

Characteristics
- A group of machines
- Not necessarily homogeneous
- A common set of users
- Some sort of batch system

Why do I want one? Ideally, your jobs can be parallelized and do not require large chunks of memory.

When do I not want one? When you need large chunks of memory, or when you only need a single fast CPU.

What do we have?
- marroo - 21 nodes
- shiraz - 21 nodes
- vidal - 16 nodes

How do I get to them?
- Accounts -
- SSH client -
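
The slide doesn't record the actual hostnames or account details, so as a purely hypothetical sketch, connecting from any machine with an SSH client looks like:

  # Log in to a head node (the hostname and username here are made up).
  ssh yourusername@vidal.example.edu

  # Copy files to your shared home directory the same way.
  scp data.tar.gz yourusername@vidal.example.edu:~/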

Quick Hardware Specs
- SunFire X, dual CPU, dual core
- 2.4 GHz Opteron cores
- 8 GB RAM

Filesystems
- /home - big, shared
- /scratch-net - big, shared
- /scratch - local, not so big
- Network filesystems live on a NAS
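
Since /scratch is local to each node while /home is shared, a common pattern (a sketch only; the program and file names are hypothetical) is to stage heavy I/O onto /scratch and copy results back when done:

  # Work on the node's local scratch disk, then copy results to shared /home.
  mkdir -p /scratch/$USER/myjob
  cp ~/input.dat /scratch/$USER/myjob/
  cd /scratch/$USER/myjob
  ./my_analysis input.dat > results.txt   # hypothetical program
  cp results.txt ~/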

Backups There aren’t (yet?) any, so be careful.

Power shiraz and marroo each have a UPS backing the head nodes and NAS; vidal will be getting a UPS to back at least that as well.

What can I do? They’re Linux boxes, so anything you could normally do on your workstation. Plus a bit more.

Sun N1 Grid Engine A fairly simple batch-system setup; our installation has only a single queue.

N1GE QuickStart
- Only on vidal
- sge-root is /home/N1GE6
- Cell name is “default”
- Important commands are qsub and qstat
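
To make those two commands concrete, here is a minimal sketch of a job script and its submission; the script name and contents are examples, not taken from the slides:

  #!/bin/sh
  # hello.sh - a trivial N1GE job script (hypothetical)
  # -N names the job, -cwd runs it in the submit directory, -j y merges stdout and stderr.
  #$ -N hello
  #$ -cwd
  #$ -j y
  echo "Running on $(hostname)"

Submit and monitor it from the vidal head node with:

  qsub hello.sh    # queue the job
  qstat            # show pending and running jobs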

Commercial Software The CFI grant has allowances for MatLab. CPLEX is installed on vidal; arrange for licensing.
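
Assuming the MatLab license is available on the node running your job, it can be driven non-interactively from a batch script; this is a sketch, and the script name my_analysis.m is hypothetical:

  #!/bin/sh
  # matlab_job.sh - run a MatLab script without a display (hypothetical)
  #$ -N matlab_job
  #$ -cwd
  #$ -j y
  matlab -nodisplay -nosplash -r "run('my_analysis.m'); exit"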

Free Software Of course, there are lots of packages available for Linux. We have installed R on vidal, along with some packages.
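
R works well in batch mode too; a minimal sketch, with a made-up script name:

  #!/bin/sh
  # r_job.sh - run an R script non-interactively (hypothetical)
  #$ -N r_job
  #$ -cwd
  #$ -j y
  R CMD BATCH --no-save my_model.R my_model.Rout

Submit it with qsub r_job.sh, the same as any other job.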

Updates and Security
- Compute nodes kept as static as possible
- Head nodes receive security updates
- Head node reboots scheduled in advance

Other Stuff
- Ganglia for rough usage data and trends
- Head nodes monitored for connectivity
- Mailing list!

Hopefully this…

… leads to this: Fun in the Sun