Developments in the Louisiana State Grid (LONI)
Dick Greenwood, Louisiana Tech University
DOSAR III Workshop at The University of Oklahoma, September 21-22, 2006

LONI Background
In September 2004, the State of Louisiana committed $40M for a state-wide optical network.
40 Gb/sec bandwidth
Spanning 6 universities and 2 health centers:
– LSU
– LaTech
– UL-Lafayette
– Tulane
– UNO
– Southern University
– LSU Health Centers in New Orleans and Shreveport

Plan

Not an Ordinary Optical Network

Systems Deployed
IBM p5 systems:
– 1 February 2006 – LaTech [bluedawg], AIX v5.3
  ● 17 users submitted 307 jobs
– 15 March 2006 – Tulane [ducky], AIX v5.2
  ● 43 users submitted 2,946 jobs
– 7 August 2006 – ULL [zeke], AIX v5.3
  ● 9 users submitted 286 jobs
– 8 September 2006 – UNO [neptune], AIX v5.3
  ● 4 users submitted 10 jobs
– ~October 2006 – SUBR [lacumba], AIX v5.3
89 total LONI users

LONI Software Stack
100% TeraGrid compatible:
– Globus Toolkit
– Condor
– Virtual Data Toolkit (VDT)
– …
The complete software stack is still to be finalized.
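As a minimal sketch of how a user job might be handed to the Condor piece of this stack, the hypothetical Python snippet below writes a vanilla-universe submit description and passes it to condor_submit on a login node. The executable name, file names, and submit-file path are illustrative placeholders, not LONI-specific configuration.

```python
#!/usr/bin/env python
# Hypothetical sketch: queue a serial job through Condor from a cluster login node.
# The executable and file names are placeholders for illustration only.
import subprocess

SUBMIT_DESCRIPTION = """\
universe   = vanilla
executable = analyze_events
arguments  = run2006.dat
output     = analyze_events.out
error      = analyze_events.err
log        = analyze_events.log
queue
"""

def submit_job(description, filename="analyze_events.sub"):
    """Write the submit description to disk and hand it to condor_submit."""
    with open(filename, "w") as f:
        f.write(description)
    # condor_submit parses the file and places the job in the local schedd's queue.
    subprocess.check_call(["condor_submit", filename])

if __name__ == "__main__":
    submit_job(SUBMIT_DESCRIPTION)
```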

New Dell Linux Clusters To Be Delivered
"Remote" systems:
– 132 nodes, four-way Intel "Woodcrest" 2.33 GHz, 4 GB RAM [4.921 TF; 528 GB RAM]
– Shipment expected week of 25 September
– First deployment will be LSU's system; next deployment likely LaTech
Central system:
– 720 nodes, eight-way Intel "Clovertown" 2.33 GHz, 4 GB RAM [… TF; 2.88 TB RAM]
– Shipment expected end of November or early December

Linux Clusters: Woodcrest and Clovertown
– Woodcrest is dual-core on a single die with a shared 4 MB cache
– Clovertown is two dual-core dies in a single package (may change to four cores on a single die)
– Both are based on 65 nm process technology initially, then 45 nm
– Both can retire four floating-point operations per clock cycle, rather than two [IBM's POWER5 also does four floats per cycle]
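As a cross-check, the 4.921 TF quoted for the "remote" clusters on the previous slide follows directly from these numbers:

132 nodes × 4 cores/node × 2.33 GHz × 4 FLOP/cycle ≈ 4.92 TFLOPS peak

Applying the same formula to the 720-node, eight-way central system gives roughly 54 TFLOPS of theoretical peak (a derived estimate; the transcript's own figure for that machine is missing).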

Dell Cluster Description
Environmentals:
– 208 V / 310 amperes; 64.5 kW; 18 tons of cooling (7,868 cfm)
– 6 racks total (4 node, 1 control, 1 storage)
– Rack dimensions: 78.7" H x 23.94" W x 39.93" D
– Each rack has 4 PDUs and 4 L6-30 208 V connections (a total of 24 L6-30 circuits)
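The quoted power draw is consistent with the electrical service on the same slide: 208 V × 310 A ≈ 64.5 kW.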

Storage
Currently very tight:
– /home is 25 GB and /scratch is 280 GB; this limits usability
– All served via the Network File System (NFS), which is not high performance
Future:
– When the central Linux cluster comes, it will include:
  ● 14.0 TB raw at each "remote" site in one rack
  ● 225 TB raw at the central site
– Will provide central /home storage as well as global /scratch space
– Using the Lustre filesystem, supported by Cluster File Systems, Inc.
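With /scratch capped at a few hundred gigabytes, a job can easily fail mid-run on a full filesystem. A hypothetical pre-flight check like the Python sketch below is one way a user might guard against that until the larger Lustre space arrives; the required size and mount point are illustrative placeholders, not LONI policy.

```python
#!/usr/bin/env python
# Illustrative sketch: refuse to stage input data unless /scratch has room for it.
import os
import sys

def free_gb(path):
    """Return the free space on the filesystem holding `path`, in gigabytes."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize / 1e9

def check_scratch(needed_gb, mount="/scratch"):
    """Exit with an error if `mount` has less than `needed_gb` gigabytes free."""
    available = free_gb(mount)
    if available < needed_gb:
        sys.exit("Only %.1f GB free on %s; need %.1f GB. Clean up before staging."
                 % (available, mount, needed_gb))
    print("%.1f GB free on %s; OK to stage %.1f GB of input."
          % (available, mount, needed_gb))

if __name__ == "__main__":
    check_scratch(needed_gb=50.0)  # placeholder threshold
```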

Award of NSF MRI: PetaShare
1 x Overland Storage NEO Series 8000 tape storage system – 400 TB capacity: $153,558
– Specifications: Model 500, 12 Native Fibre LTO-3 Tape Drives, 500 Tape Slots, Redundant Power, Remote Access & Control, Touch Screen Front Panel (quote includes shipping and assembly)
– Deployment site: Louisiana State University
5 x Overland Storage REO Series 9000 disk storage system – 44 TB capacity each: 5 x $96,144 = $480,720
– Specifications: 44 TB Raw SATA, 38 TB usable RAID 5, Protection OS, Remote Access and Control, Rackmount (quote includes shipping and assembly)
– Deployment sites: Louisiana State University, Louisiana Tech University, Tulane University, University of Louisiana at Lafayette, University of New Orleans
Total requested equipment cost: $634,278
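The costs on this slide are internally consistent: 5 × $96,144 = $480,720 for the disk systems, and $153,558 + $480,720 = $634,278 total.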

Tevfik's Schedule for PetaShare
Year 1: The required equipment for the development of PetaShare will be purchased, installed at each participating site, calibrated, and integrated with the existing equipment at these sites. In parallel with this process, we will start developing the key technologies necessary for this instrumentation: data-aware storage, data-aware schedulers, and remote data analysis and visualization.
Year 2: Technologies developed during the first year will be integrated. Transparent storage selection, data archival, cataloging, and remote access techniques will be developed on top of this integrated system. A user-friendly and uniform interface will be developed for interaction.
Year 3: The first prototypes will be deployed at the participating sites. Testing and further development will be performed. Application groups will start using the new instrumentation actively.
Year 4: The developed system will be ported to other platforms. It will be made publicly available to the community. Commercialization potential will be investigated.
We are currently in the process of negotiating with different vendors. The purchase of the equipment may happen sometime in late December or early January.

THE END

EXTRA SLIDES

Coastal Modeling
– Hurricane Track Prediction
– Storm Surge Modeling
– Predicting Wind Effects
– Coastal Erosion Modeling
– Emergency Response

SCOOP Project
– SCOOP Portal

Other Applications
– Numerical Relativity
– Petroleum Engineering
– Computational Fluid Dynamics (CFD)
– High Energy Physics
– Bioinformatics

LONI Topology – POPs & DWDMs
[network map of LONI POP and DWDM sites across Louisiana, with fiber segment distances in km]

LONI Topology – Status
[same network map as the previous slide]

LONI Computing Power
At each of the 6 universities:
– IBM p5-575 system (112 POWER processors, 224 GB RAM)
At LSU:
– SuperMike (1,024 Xeon processors, 1 TB RAM)
– SuperHelix (256 Xeon processors, 256 GB RAM)
– MiniMike (32 Xeon processors, 32 GB RAM)
– SGI Prism (32 processors, 128 GB RAM)
– Nemeaux (64 Mac processors)
– LSU Campus Grid with ~2,000 processors (soon)
At UL Lafayette:
– LITE (384 processors, 384 GB RAM)
At LaTech:
– CAPS (30 processors)
And several others.
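For scale, the statewide p5 deployment alone comes to 6 × 112 = 672 POWER processors and 6 × 224 GB ≈ 1.3 TB of aggregate RAM.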