Condor in Louisiana
Tevfik Kosar
Center for Computation & Technology, Louisiana State University
April 30, 2008

Roadmap
- HPC Resources in Louisiana
- Need for Condor?
- Reservoir Simulations
- Coastal Modeling
- Conclusions

Louisiana Optical Network Initiative (LONI)
The Louisiana Optical Network Initiative (LONI) is a high-speed computing and networking resource supporting scientific research and the development of new technologies, protocols, and applications to positively impact higher education and economic development in Louisiana.
- 40 Gb/sec bandwidth state-wide
- Next-generation network for research
- Connected to the National LambdaRail (NLR, 10 Gb/sec) in Baton Rouge
- Spans 6 universities and 2 health centers

LONI Computing Resources
- 1 x Dell 50 TF Intel Linux cluster housed at the state's Information Systems Building (ISB)
  - "Queen Bee", named after Governor Kathleen Blanco, who pledged $40 million over ten years for the development and support of LONI
  - 680 nodes (5,440 CPUs), 688 GB RAM
  - Two quad-core 2.33 GHz Intel Xeon 64-bit processors and 8 GB RAM per node
  - Measured 50.7 TF peak performance
  - According to the June 2007 Top500 listing, Queen Bee ranked as the 23rd fastest supercomputer in the world
- 6 x Dell 5 TF Intel Linux clusters housed at 6 LONI member institutions
  - 128 nodes (512 CPUs), 512 GB RAM per cluster
  - Two dual-core 2.33 GHz Xeon 64-bit processors and 4 GB RAM per node
  - Measured TF peak performance
- 5 x IBM Power5 575 AIX clusters housed at 5 LONI member institutions
  - 13 nodes (104 CPUs), 224 GB RAM per cluster
  - Eight 1.9 GHz IBM Power5 processors and 16 GB RAM per node
  - Measured TF peak performance
- Combined total of 84 Teraflops

LONI: The Big Picture (diagram by Chris Womack): National LambdaRail, the Louisiana Optical Network, IBM P5 supercomputers, the Dell 80 TF cluster, and LONI member sites.

Why Condor?
- Who would say no to more free cycles?
- Condor is more than cycle-stealing
- The Condor project for us:
  - Batch scheduler (Condor)
  - Gateway to the Grid (Condor-G)
  - Grid software stack (DAGMan, NeST, Stork, ...)
  - Open source (do your own thing)
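To make the batch-scheduler role concrete, a minimal Condor submit description file looks like the sketch below. The executable and file names are hypothetical, and the commented-out grid-universe lines only illustrate how the same job-description style carries over to Condor-G; the exact grid_resource string depends on the remote gatekeeper.

    # sketch of a vanilla-universe Condor job (file names are made up)
    universe    = vanilla
    executable  = run_simulation
    arguments   = input.dat
    output      = sim.out
    error       = sim.err
    log         = sim.log
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    queue

    # with Condor-G the same job can be routed to a remote Globus gatekeeper,
    # e.g. (hostname is a placeholder):
    #   universe      = grid
    #   grid_resource = gt2 gatekeeper.example.edu/jobmanager-pbs

Submitting the file with condor_submit and watching it with condor_q is all a user needs for the plain cycle-stealing case.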

UCoMS: Ubiquitous Computing and Monitoring System for Discovery and Management of Energy Resources
Goals:
- Reservoir simulation and uncertainty analysis: 26M simulations, each generating 50 MB of data --> 1.3 PB of data in total
- Drilling processing and real-time monitoring is data-intensive as well --> real-time visualization and analysis of TBs of streaming data
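A run of this scale maps naturally onto a Condor parameter sweep: one submit file queues many instances, and each instance picks its own input via $(Process). The sketch below is illustrative only; the executable, file names, and count are placeholders, not the actual UCoMS setup.

    # sketch: many independent reservoir-simulation runs from one submit file
    universe    = vanilla
    executable  = reservoir_sim
    arguments   = params.$(Process)        # each instance reads its own parameter file
    output      = run.$(Process).out
    error       = run.$(Process).err
    log         = sweep.log
    queue 10000                            # placeholder count; the full study is far larger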

UCoMS Abstract Workflow

UCoMS Concrete Workflow
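A concrete workflow of this kind is what DAGMan executes: a DAG file lists the jobs and their ordering, and in the Stork-aware setup described in the Stork work, data placement steps appear as DATA nodes alongside compute JOB nodes. The node names and submit-file names below are hypothetical, a sketch rather than the actual UCoMS DAG.

    # sketch of a DAGMan input file mixing Stork data placement and compute jobs
    DATA   StageIn    stage_in.stork      # transfer input data to the cluster (Stork)
    JOB    Simulate   simulate.submit     # run the reservoir simulation (Condor)
    DATA   StageOut   stage_out.stork     # ship results back (Stork)
    PARENT StageIn   CHILD Simulate
    PARENT Simulate  CHILD StageOut
    RETRY  Simulate  2                    # re-run the simulation node on failure

condor_submit_dag hands the whole graph to DAGMan, which submits each node only after its parents have finished.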

Putting It Together

Monitoring DAGs via Web: UCoMS Closed Loop Demonstration (SC07)

Monitoring DAGs via Web

Monitoring DAGs via Web

SCOOP: SURA Coastal Ocean Observing and Prediction Program
(SURA: Southeastern Universities Research Association)
Goals:
- Execution of atmospheric (WindGen) and hydrodynamic (WW3, ADCIRC) models for predicting the effects of a storm (e.g. storm surge)
- 32 tracks per storm, every six hours
- Each track may have a different priority
- Issues relate to data and workflow management, resource discovery and brokering, and task farming

SCOOP Scheduling
Scheduling issues:
- Dynamic prioritization based on scenario and resources
- Three queues: on-demand (preemptive), priority, and best effort
- Co-scheduling, advance reservation
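One plausible way to realize the three classes with plain Condor mechanisms is sketched below; the job attribute name ScoopClass and the rank weights are made up for illustration, not taken from the actual SCOOP configuration. Jobs tag themselves with a class at submit time, and a startd RANK expression prefers on-demand work, which through rank-based preemption can displace lower-ranked best-effort jobs already running.

    # in the submit file: tag the job with its class (attribute name is hypothetical)
    +ScoopClass = "ondemand"          # or "priority" / "besteffort"
    priority    = 10                  # orders jobs of the same user only

    # in condor_config on the execute nodes: rank on-demand work highest so it can
    # preempt lower-ranked (best-effort) jobs occupying the machine
    RANK = (TARGET.ScoopClass =?= "ondemand") * 1000 + \
           (TARGET.ScoopClass =?= "priority") * 100

Co-scheduling and advance reservation are not covered by this sketch; they need machinery beyond stock Condor.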

Best Effort Scheduling

On-demand Scheduling

Conclusions
- Condor is more than a cycle-stealing tool
- We have used Condor, DAGMan, and Stork successfully in end-to-end processing for:
  - Coastal modeling
  - Reservoir simulations
- Willing to share experience
- Visually monitoring DAG progress
- New Stork release coming soon