Purdue RP Highlights
TeraGrid Round Table, May 20, 2010
Preston Smith, Manager - HPC Grid Systems
Rosen Center for Advanced Computing, Purdue University

Updates on Purdue Condor Pool

- Purdue Condor resource now in excess of 30,000 cores
- Recent active users:
  - Fixation tendencies of the H3N2 influenza virus
  - N-body simulations: planets to cosmology
  - De novo RNA structures with experimental validation
  - Planet-planet scattering in planetesimal disks
  - Robetta Gateway

TeraGrid Round Table, 5/20/2010

New Developments in Condor Pool

- Virtual Machine "Universe"
  - Running on student Windows labs today, with VMware
  - Integrating now: KVM and libvirt on cluster (Steele) nodes

Condor VM Use Cases

- "VMGlide": using Condor to submit, transfer, and boot Linux VMs as cluster nodes on Windows systems
  - More usable to the end user!
  - Tested and demonstrated at a scale of roughly 800 VMs over a weekend, all running real user jobs inside the VM container
  - Working with the Condor team to minimize the impact of transferring hundreds of VM images over the network
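As a rough illustration of the VM universe described above, a submit description for a VM-universe job might look like the following. This is a minimal sketch based on the Condor 7.x manual's VM-universe attributes; the image directory and memory size are hypothetical, and exact attribute names vary by Condor version and VM type.

```
# Sketch of a Condor VM-universe submit file (VMware variant).
# The vm_image directory and sizes are illustrative, not Purdue's actual setup.
universe                     = vm
vm_type                      = vmware
vm_memory                    = 1024          # MB of RAM for the guest
vm_networking                = true
vmware_dir                   = /path/to/vm_image_dir
vmware_should_transfer_files = true
queue
```

A KVM-based variant (as being integrated on Steele) would use `vm_type = kvm` with the corresponding disk attributes instead of the `vmware_*` settings.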

Condor VM Use Cases

- User-submitted virtual machines
  - Example: a user has code written in Visual Basic that runs for weeks at a time on his PC
  - Submitting to Windows Condor is an option, but the long runtime, coupled with an inability to checkpoint, limits its utility
  - Solution: submit the entire Windows PC as a VM universe job, which can be suspended, checkpointed, and moved to a new machine until execution completes

Condor and Power

- In the economic climate of 2010, Purdue, like many institutions, is looking to reduce power costs
- The campus Condor grid will help! By installing Condor on machines around campus, we:
  - Get useful computation out of the powered-on machines
  - And if there's no work to be done? Condor can hibernate the machines and wake them when there is work waiting
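The hibernation behavior above is driven by Condor's power-management configuration. A sketch of the relevant condor_config settings, with setting names taken from the Condor manual's power-management section but a purely illustrative idle threshold:

```
# Sketch of condor_config power-management settings (threshold is hypothetical).
# How often (seconds) each machine evaluates whether it may sleep.
HIBERNATE_CHECK_INTERVAL = 300

# Seconds the machine has been in its current state.
StateTimer = (CurrentTime - EnteredCurrentState)

# If the machine is unclaimed and has been idle for over an hour,
# suspend to RAM ("S3"); otherwise stay awake ("NONE").
HIBERNATE = ifThenElse( (State == "Unclaimed") && ($(StateTimer) > 3600), \
                        "S3", "NONE" )
```

Waking machines when work arrives is handled separately (e.g., by the condor_rooster daemon sending wake-on-LAN packets), per the same section of the manual.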

Cloud Computing: Wispy

- Purdue staff operate an experimental cloud resource
  - Built with Nimbus from the University of Chicago
  - Current specs: 32 nodes (128 cores)
    - 16 GB RAM and 4 cores per node
    - Public IP space for VM guests

The Workspace Service

Slide borrowed from Kate Keahey:

Wispy – Use Cases

- Used in virtual clusters (publications using Purdue's Wispy cited below)
- NEES project exploring using Wispy to provision on-demand clusters for quick turnaround of wide parallel jobs
- Working with faculty at Marquette University to use Wispy in a Fall 2010 course teaching cloud computing concepts
- With the OSG team, using Wispy (and Steele) to run VMs for the STAR project

References:
- A. Matsunaga, M. Tsugawa, and J. Fortes, "CloudBLAST: Combining MapReduce and Virtualization on Distributed Resources for Bioinformatics Applications," eScience.
- K. Keahey, A. Matsunaga, M. Tsugawa, and J. Fortes, "Sky Computing," to appear in IEEE Internet Computing, September 2009.

HTPC – High-Throughput Parallel Computing

- With OSG and Wisconsin, using Steele to submit ensembles of single-node parallel jobs
  - Package jobs with a parallel library (MPI, OpenMP, etc.)
  - Submit to many OSG sites as well as TeraGrid
- Who's using it? Chemistry: over 300,000 hours used in January
  - HTPC enabled 9 papers to be written in 10 months!
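A single-node parallel job in this HTPC model claims an entire execute node rather than one slot. A sketch of such a submit file, following contemporaneous OSG/Condor whole-machine examples; the wrapper script name is hypothetical, and the `CAN_RUN_WHOLE_MACHINE` attribute must be advertised by sites configured for whole-machine jobs:

```
# Sketch of an HTPC-style whole-node submit file (illustrative names).
universe                = vanilla
executable              = run_openmp_job.sh   # wrapper launching the 8-way job
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
# Ask for an entire execute node rather than a single slot.
+RequiresWholeMachine   = True
Requirements            = CAN_RUN_WHOLE_MACHINE
queue
```

The wrapper script typically starts the MPI or OpenMP processes itself, which is what lets these jobs flock to many OSG sites without a site-level parallel scheduler.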

Storage

- DC-WAN mounted and used at Purdue
  - Working on Lustre LNET routers to reach compute nodes
- Distributed Replication Service (DRS)
  - Sharing spinning disk to DRS today
  - Investigating integration with the Hadoop Distributed File System (HDFS)
  - NEES project investigating using DRS to archive data

TeraGrid Round Table, 5/20/2010