Presentation transcript:

Applications
In Hand:
- FLASH (HDF-5)
- ENZO (MPI-IO; collective-write style sketched below)
- STAR
Likely:
- Climate – Bill G to contact (Michalakes (NetCDF?), Drake (??))
- Supernovae – Randy to contact (ORNL contact is Ross Toedte)
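For context, the MPI-IO path used by an application like ENZO amounts to each rank writing its block of a shared file through a collective call. The minimal C sketch below shows that access style only; the file name, block size, and data values are illustrative assumptions, not taken from ENZO itself.

    /* Minimal sketch: each rank writes a contiguous block of doubles to a
     * shared file with collective MPI-IO, the style of access ENZO-class
     * codes use.  File name and block size are illustrative only. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        MPI_Status status;
        int rank;
        const int N = 1024;               /* doubles per rank (illustrative) */
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) buf[i] = rank + i * 1e-3;

        MPI_File_open(MPI_COMM_WORLD, "enzo_dump.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* Each rank views the file starting at its own byte offset. */
        MPI_File_set_view(fh, (MPI_Offset)rank * N * sizeof(double),
                          MPI_DOUBLE, MPI_DOUBLE, "native", MPI_INFO_NULL);

        /* Collective write lets ROMIO aggregate requests across ranks. */
        MPI_File_write_all(fh, buf, N, MPI_DOUBLE, &status);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }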

Task Cooperation
Testbed access
- Probe (ORNL) and Chiba City (ANL)
- Look into making some compute cycles available as well to attract applications
WAN Communication (TCP mods/replacements)
- ANL, ORNL
MPI-IO over HPSS
- ANL, NWU, ORNL (+LLNL?) to evaluate
GridFTP experience
- LBNL, ANL
PVFS/MPI-IO Hints (hint-passing sketched below)
- ANL, NWU
Secondary/Tertiary Integration (pipelining)
- ANL, NWU, LBNL, …
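The PVFS/MPI-IO hints item refers to passing tuning information through the MPI_Info argument at file-open time so that ROMIO can forward it to the underlying file system. A hedged sketch follows; "striping_factor", "striping_unit", and "cb_buffer_size" are reserved MPI-IO/ROMIO hint keys, but which keys a particular ROMIO/PVFS build honors, and the values used here, are assumptions.

    /* Sketch: passing tuning hints to ROMIO at file-open time.
     * Which hints a given ROMIO/PVFS combination honors varies;
     * the values below are placeholders. */
    #include <mpi.h>

    void open_with_hints(const char *path, MPI_File *fh)
    {
        MPI_Info info;

        MPI_Info_create(&info);
        MPI_Info_set(info, "striping_factor", "16");     /* stripe across 16 I/O servers */
        MPI_Info_set(info, "striping_unit", "1048576");  /* 1 MB stripe unit */
        MPI_Info_set(info, "cb_buffer_size", "4194304"); /* collective buffering size */

        MPI_File_open(MPI_COMM_WORLD, path,
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, info, fh);
        MPI_Info_free(&info);
    }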

“Campaigns”
Definition:
- Multi-group efforts
- Deliverables in the first year
Candidates:
- Hints through MPI-IO → ROMIO → PVFS for applications
- NetCDF → MPI-IO (coord with climate apps; see the PnetCDF-style sketch below)
- Opportunities in HSM, WAN performance, others
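The NetCDF → MPI-IO candidate is the idea of giving climate applications a netCDF-style interface whose I/O is carried out through MPI-IO. The sketch below uses the ncmpi_* calls of the Parallel netCDF (PnetCDF) library that grew out of this ANL/Northwestern line of work; the file, dimension, and variable names and the 1-D decomposition are illustrative assumptions.

    /* Sketch of the NetCDF-over-MPI-IO idea using the PnetCDF (ncmpi_*) API.
     * Dimension, variable, and file names are illustrative. */
    #include <mpi.h>
    #include <pnetcdf.h>

    void write_field(MPI_Comm comm, const float *local, MPI_Offset nlocal)
    {
        int rank, nprocs, ncid, dimid, varid;
        MPI_Offset start, count;

        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &nprocs);

        /* Collectively create the file; PnetCDF drives MPI-IO underneath. */
        ncmpi_create(comm, "climate_field.nc", NC_CLOBBER | NC_64BIT_OFFSET,
                     MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "x", nlocal * nprocs, &dimid);
        ncmpi_def_var(ncid, "temperature", NC_FLOAT, 1, &dimid, &varid);
        ncmpi_enddef(ncid);

        /* Each rank writes its own slab with a collective put. */
        start = rank * nlocal;
        count = nlocal;
        ncmpi_put_vara_float_all(ncid, varid, &start, &count, local);

        ncmpi_close(ncid);
    }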

Other Tasks
Many enhancements to tools (PVFS, ROMIO, HRM)
Investigations of APIs, strategies
- Exploit applications for insight
- Cross-cultural APIs (e.g., HSI, Grid IO, MPI-IO interactions)
Year 2-3 tasks
- Further enhancements (per proposal)
- Opportunities: Secondary/Tertiary I/O pipelining (overlap idea sketched below)
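Secondary/tertiary I/O pipelining is listed here only as an opportunity. As a purely conceptual illustration of the overlap it implies, the sketch below starts a nonblocking MPI-IO write and does other work before waiting on it; it says nothing about how staging to tertiary storage (e.g., HPSS) would actually be wired in.

    /* Conceptual sketch of overlapping I/O with other work using nonblocking
     * MPI-IO.  Staging the written data to tertiary storage would happen in
     * a separate transfer step not shown here. */
    #include <mpi.h>

    void write_overlapped(MPI_File fh, double *chunk_a, double *chunk_b, int n)
    {
        MPI_Request req;
        MPI_Status status;

        /* Start writing chunk A without blocking ... */
        MPI_File_iwrite(fh, chunk_a, n, MPI_DOUBLE, &req);

        /* ... while chunk B is produced/filled here ... */
        for (int i = 0; i < n; i++) chunk_b[i] = 2.0 * i;

        /* ... then wait for A to complete before reusing its buffer. */
        MPI_Wait(&req, &status);
    }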