TeraScale Supernova Initiative


TeraScale Supernova Initiative
Supernovae driven by the gravitational collapse of the core of a massive star
DMW2004, March 16, 2004

3D runs could be routine if… …we could handle the data
- Linear evolution: 400³ grid, 500 GB of data, 8-hour run
- Non-linear evolution: up to 1000³ grid, terabytes of data, 30-50 hour run
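As a rough check on the data volumes quoted above, the sketch below (not from the original slides) estimates per-snapshot and per-run sizes from the grid resolution; the variable count, precision, and snapshot count are illustrative assumptions.

```python
# Back-of-envelope estimate of output volume for the quoted grid sizes.
# Assumptions (not from the slides): 5 double-precision fields per cell and
# an illustrative number of snapshots per run.

def run_volume_bytes(n_per_side, n_vars=5, bytes_per_value=8, n_snapshots=50):
    """Return (bytes per snapshot, total bytes per run) under the stated assumptions."""
    snapshot = n_per_side ** 3 * n_vars * bytes_per_value
    return snapshot, snapshot * n_snapshots

for n in (400, 1000):
    snap, total = run_volume_bytes(n)
    print(f"{n}^3 grid: {snap / 1e9:5.1f} GB per snapshot, ~{total / 1e12:4.2f} TB per run")
```

With these assumptions a 400³ run stays well under a terabyte while a 1000³ run reaches terabytes, consistent with the orders of magnitude on the slide.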

Science begins with Data
Scientific discovery requires interactive access to data:
- interactive access on a large-memory computer for analysis and visualization
- high bandwidth when accessing the data
- sufficient storage (many TB) to hold data for weeks
Current solution: move the data to a dedicated computer, a 22-node Linux cluster running EnSight parallel visualization software.

To move, or not to move… Working with data at the remote site is not currently possible due to the lack of:
- a shared file system
- a large number of interactive processors
- online storage for many TBs
- a low-latency, high-bandwidth WAN
[Diagram: Cray X1 → WAN → visualization platform; a billion-cell simulation in 30 hours generates 4 terabytes.]

Current mode of operation
[Diagram of the current data path: the Cray X1 at ORNL (a billion-cell simulation in 30 hours generates 4 terabytes) writes HDF5 output to HPSS and ships data over a logistical network to a Linux cluster at NCSU with 4 TB of local disks; the link rates shown are ~a few MB/s and 20 MB/s. Sample visualization panels at time = 2.2: density, pressure, velocity, B field.]
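The slide identifies HDF5 as the output format written by the simulation. Below is a minimal h5py sketch of writing one snapshot's fields to a file; the dataset names, shapes, and file layout are assumptions for illustration, not the actual TSI schema.

```python
# Minimal sketch of writing one snapshot of simulation fields to HDF5 with
# h5py. Dataset names and layout are illustrative assumptions, not the
# actual TSI file schema.
import numpy as np
import h5py

n = 64  # small grid for illustration; production runs used 400^3 to 1000^3 cells
fields = {
    "density":  np.random.rand(n, n, n),
    "pressure": np.random.rand(n, n, n),
    "velocity": np.random.rand(3, n, n, n),  # vector field
    "bfield":   np.random.rand(3, n, n, n),
}

with h5py.File("snapshot_0001.h5", "w") as f:
    f.attrs["time"] = 2.2  # simulation time, as in the sample panels
    for name, data in fields.items():
        # Chunked, compressed datasets keep large 3D arrays manageable on disk.
        f.create_dataset(name, data=data, chunks=True, compression="gzip")
```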

Data Management Demands NOW!
- Volume of data: 100 TB (<5 TB per run)
- Bandwidth to storage: 100 MB/s
- WAN bandwidth: 100 MB/s
- LoCI for replication and collaboration
- Data integrity??
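To put these targets in perspective, the arithmetic below (volumes and bandwidths from the slide, calculation added here) works out how long a single run and the full archive take to move at 100 MB/s.

```python
# Transfer-time arithmetic for the stated demands at 100 MB/s sustained.
def transfer_hours(volume_bytes, rate_bytes_per_s=100e6):
    return volume_bytes / rate_bytes_per_s / 3600

TB = 1e12
print(f"5 TB (one run) at 100 MB/s:   {transfer_hours(5 * TB):5.1f} hours")
print(f"100 TB (archive) at 100 MB/s: {transfer_hours(100 * TB) / 24:5.1f} days")
```

Even at the requested rates, a single run takes roughly half a day to move and the full archive nearly two weeks.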

Data Management Demands Future?
- Volume of data: petabytes are not too far away
- Bandwidth to storage: must keep pace with flops
- WAN bandwidth: do we move a PB?
- Storage: efficient access, easy to use!
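As a companion to the question "do we move a PB?", the sketch below inverts the calculation: the sustained WAN rate needed to move a petabyte within a given window. The window lengths are illustrative assumptions; only the petabyte figure comes from the slide.

```python
# Sustained WAN bandwidth needed to move 1 PB within a given number of days.
PB_BITS = 1e15 * 8  # one petabyte expressed in bits
for days in (7, 30, 90):
    gbps = PB_BITS / (days * 86400) / 1e9
    print(f"1 PB in {days:3d} days needs ~{gbps:4.1f} Gb/s sustained")
```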