
Creating Grid Resources for Undergraduate Coursework
John N. Huffman, Brown University
Richard Repasky, Indiana University
Joseph Rinkovsky, Indiana University

The Problem: Rendering
Animations typically contain thousands of frames, played back at roughly 24–30 frames per second
Each frame can be computationally complex and can take hours to render
Datasets can be moderate in size; results are usually small per job (< 10 MB)
Each frame can be treated as an independent computation, so the workload maps well onto serial per-frame processing
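Because every frame is an independent computation, an animation decomposes naturally into per-frame jobs. A minimal sketch of that decomposition in Python, where the scene file name and the generic render command are hypothetical stand-ins for the actual packages:

```python
# One independent task per frame: frames have no dependencies on each
# other, so the animation maps directly onto a pool of serial jobs.

def frame_tasks(scene_file, start, end):
    """Yield one self-contained shell command per frame."""
    for frame in range(start, end + 1):
        # 'render' stands in for the real package binary (Blender, Maya, ...).
        yield (f"render --scene {scene_file} "
               f"--frame {frame} --out frame_{frame:04d}.png")

# Example: a 300-frame animation becomes 300 independent jobs.
jobs = list(frame_tasks("shot01.blend", 1, 300))
print(len(jobs), "jobs; first:", jobs[0])
```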

Student Rendering Challenges
No dedicated resource
Rendering software may require resources that are not HPC-friendly
Students work from a variety of platforms: Mac, Windows, Linux, SGI
Students are typically not familiar with HPC technology

Indiana University Render Portal
Find a suitably large/powerful compute resource for undergraduate student use
Find adequate storage
Create an interface that is:
– Easy
– Intuitive
– Expandable
– Upgradeable
– Powerful

Computing Resources Available at Indiana University
Quarry – IBM HS21 blade supercomputer
– 112 nodes, each with two quad-core Intel Xeon 5335 processors
– Linux OS
Big Red – JS21 blade supercomputer
– 512 blades, each with two dual-core PowerPC 970MP processors
– Linux OS
AVIDD – distributed cluster computer
– 192 nodes, each with dual Intel Pentium 4 processors
– Linux OS
Student Technology Center (STC) workstations (Condor pool)
– ~2500 Windows-based systems for student labs and general computing

STC Workstation Condor Pool Systems
3-year life-cycle replacement
– 512 MB memory minimum
– CPU speeds in the GHz range
– 100 Mb/s to 1 Gb/s network interconnect
– Homogeneous Windows-based software setup
Funded through student fees ONLY

IU System Comparisons

Storage
Future migration to the Data Capacitor
– Lustre file system
– 500 TB
– Automatic cleaning/purging of old files
– Mounted only on the Condor master node, so there is no direct access from workstations

Render Portal Interface

Render Portal Overview

System Modularity
Render packages are sent with each job run (see the sketch below)
– No software is installed on the workstations
– New packages and updates can be added quickly
– Old versions of software can still be used
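One way this can be realized with Condor is to list the render package archive as a job input file, so Condor ships it to the workstation along with the scene. A hedged sketch, assuming a hypothetical wrapper script and archive name; the submit-file commands themselves (transfer_input_files, should_transfer_files, and so on) are standard Condor:

```python
# Sketch: generate a per-frame Condor submit file that carries the render
# package with the job. Names (render_wrapper.bat, blender-pkg.zip) are
# hypothetical; the Condor submit commands are standard.

SUBMIT_TEMPLATE = """\
universe                = vanilla
executable              = render_wrapper.bat
arguments               = {scene} {frame}
transfer_input_files    = {package_zip}, {scene}
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output                  = frame_{frame:04d}.out
error                   = frame_{frame:04d}.err
log                     = render.log
queue
"""

def write_submit_file(scene, frame, package_zip, path):
    # The package travels with the job, so nothing is installed on the
    # workstation and old package versions remain usable.
    with open(path, "w") as f:
        f.write(SUBMIT_TEMPLATE.format(scene=scene, frame=frame,
                                       package_zip=package_zip))

write_submit_file("scene.blend", 42, "blender-pkg.zip", "frame_0042.sub")
```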

Render Portal Interface

File Upload and Selection

Submit Portlet

Job Submit and Monitor

Job Monitoring
The job is submitted to the Condor pool using the files created by the scripts.
A monitoring process is created for each render job; it tracks progress and updates a SQL database with:
– Job information (render package, scene, user)
– Frames finished
– Average frame times
– Current status
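A minimal sketch of the per-job monitor's database update, assuming a hypothetical schema and using SQLite for brevity (the slides do not specify the portal's actual DBMS or fields):

```python
import sqlite3

def update_job_status(db_path, job_id, frames_done, avg_seconds, status):
    """Record one render job's progress: frames finished, average
    per-frame time, and current status."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS render_jobs (
                        job_id      TEXT PRIMARY KEY,
                        frames_done INTEGER,
                        avg_seconds REAL,
                        status      TEXT)""")
    # Insert the row on first sight of the job, update it afterwards.
    conn.execute("""INSERT INTO render_jobs VALUES (?, ?, ?, ?)
                    ON CONFLICT(job_id) DO UPDATE SET
                        frames_done = excluded.frames_done,
                        avg_seconds = excluded.avg_seconds,
                        status      = excluded.status""",
                 (job_id, frames_done, avg_seconds, status))
    conn.commit()
    conn.close()

update_job_status("portal.db", "blender-301", 120, 43.0, "running")
```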

Job Queue

Finishing the Job
When a job finishes, the monitor process will:
– Check whether the job completed successfully
– Create a preview Flash movie of the completed images
– Create a zip file of the completed images
– Send a notification to the user with the status of the job
– Notify the administrator if there was a problem
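The same monitor process handles this cleanup. A hedged sketch of the post-job step: the directory layout and the notification hook are assumptions, and the preview-movie encoding is left out for brevity:

```python
import glob
import zipfile

def finish_job(job_dir, expected_frames, notify):
    frames = sorted(glob.glob(f"{job_dir}/frame_*.png"))
    ok = len(frames) == expected_frames        # did the job complete?
    if ok:
        # Zip file of the completed images for the user to download.
        with zipfile.ZipFile(f"{job_dir}/frames.zip", "w") as zf:
            for f in frames:
                zf.write(f)
        # A preview movie would be encoded here with an external encoder.
    # Notify the user either way; a failure would also alert the admin.
    notify(ok, len(frames))

finish_job("job_301", 300, lambda ok, n: print("done" if ok else "FAILED", n))
```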

Detail Job Information and Preview

Condor Usage Statistics

Performance Evaluation
300-frame Blender animation
– Dual-CPU workstation: 21.5 minutes per frame (~107 hours total)
– Big Red supercomputer: 95 minutes
– Condor cluster: 173 minutes
504-frame Maya animation
– Dual-CPU workstation: 5 minutes per frame (~42 hours total)
– Condor cluster: 68 minutes
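The workstation figures are per frame, so the serial totals and the cluster speedups can be checked directly from the numbers above:

```python
# Sanity check of the reported times (all figures from the slide).
blender_serial_h = 300 * 21.5 / 60   # 107.5 h, matching "~107 hours"
maya_serial_h    = 504 * 5.0 / 60    # 42 h
print(blender_serial_h / (173 / 60)) # Condor speedup on Blender: ~37x
print(blender_serial_h / (95 / 60))  # Big Red speedup on Blender: ~68x
print(maya_serial_h / (68 / 60))     # Condor speedup on Maya: ~37x
```

On both animations the Condor pool of student-lab workstations delivers roughly a 37x speedup over a single dual-CPU workstation.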

Typical Render Jobs

Credits
Marlon Pierce – IU
Yu (Marie) Ma – IU
Craig Stewart – IU
Margaret Dolinsky and her students at IU
Albert William – IUPUI
Supported by an NSF grant