Evolution of the successful Forum for Computational Excellence (FCE) pilot project – raising awareness of the HEP response to the rapid evolution of the computational landscape.

Center for Computational Excellence (CCE)

Evolution of the successful Forum for Computational Excellence (FCE) pilot project, raising awareness of the HEP response to the rapid evolution of the computational landscape.

Transitioning from FCE to CCE, the Center for Computational Excellence:
- More interactive and task-oriented, with a "laser focus" on emerging architectures in exascale and "post-Moore" computing environments.
- Focus areas include:
  - Exascale systems: optimal configuration issues for HEP applications
  - Intelligent networking
  - Exascale infrastructure fabric (cyber, caching, data movement)
  - Machine learning and other data-intensive analysis tools
- The center would be co-located at ANL, LBNL, and FNAL.

Over the next year:
- Establish technical leads and initial projects, provide collaboration infrastructure, and initiate summer programs.
- Incubate new technical projects.

FCE Highlights Over the Last Year

- FCE kickoff meeting at ANL last September to get organized.
- Led an ASCR/HEP exascale computing review.
- Published our 3 working group reports (arXiv:).
- Led the HEP portion of the ASCR workshop "Data Management, Visualization, and Analysis of Experimental and Observational Data (EOD)".
- Established a web presence.
- Coordinated submission of 5 white papers for the ASCR exascale application call; submitted a co-design center white paper.
- Executed on 3 target areas:
  - Mini-apps
  - Data transfer project
  - Software containers (and related work)

Mini-Apps

- Mini-apps (Heroux/Barrett): compact, self-contained proxies that adequately portray the computational loads and dataflow, but are not meant to be scientifically realistic.
- The HEP mini-app collection (thousands of lines of code each) will be a good way to develop a design suite for next-generation architectures and to ensure that the exascale landscape can handle our workflows (a toy sketch of the genre follows this slide).
- Mini-apps developed for:
  - Computational cosmology (ANL lead)
  - Lattice QCD (FNAL lead)
  - CMS workflow (FNAL lead)
  - Geant4 (FNAL lead)
  - Neutrinos (LArSoft) (FNAL lead)
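To make the genre concrete, here is a minimal hypothetical sketch of what such a proxy can look like. It is not one of the mini-apps listed above; the kernel, names, and parameters are all invented, and it only mimics a characteristic HEP pattern (independent particle stepping with stochastic energy loss, loosely Geant4-flavored) so that the hot loop can be timed on a new architecture in isolation.

```python
# Toy stand-in, NOT one of the actual FCE/CCE mini-apps: no real physics,
# just a representative vectorized stepping loop with a throughput metric.
import time
import numpy as np

def transport(n_particles=1_000_000, n_steps=100, seed=0):
    """Step particles through a toy medium until they run out of energy."""
    rng = np.random.default_rng(seed)
    energy = rng.uniform(1.0, 10.0, n_particles)          # arbitrary units
    alive = np.ones(n_particles, dtype=bool)
    steps = 0
    for _ in range(n_steps):
        loss = rng.exponential(0.05, n_particles)         # stochastic loss
        energy = np.where(alive, energy - loss, energy)   # vectorized hot loop
        alive &= energy > 0.0
        steps += 1
        if not alive.any():
            break
    return energy, alive, steps

if __name__ == "__main__":
    t0 = time.perf_counter()
    energy, alive, steps = transport()
    dt = time.perf_counter() - t0
    rate = 1_000_000 * steps / dt                         # particle-steps/s
    print(f"{alive.sum()} particles alive after {steps} steps; "
          f"{rate:.2e} particle-steps/s")
```

A real mini-app in the collection would additionally capture the dataflow of its parent application (I/O volume, communication pattern), which is what makes it useful for evaluating exascale configurations.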

Software Containers

- Data-intensive computing often requires complex software stacks; efficiently supporting "big software" in an HPC environment presents many challenges.
- Shifter, developed using Docker technology, supports user-defined and user-provided application images on today's HPC machines.
- Recently featured in HPCwire and other media outlets.
- MicroBooNE is now successfully piloting Shifter to use HPC for part of its computing repertoire (a hypothetical job-wrapper sketch follows).
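As an illustration only, here is a hedged sketch of how an experiment's job wrapper might run its payload inside a user-provided image when the Shifter runtime is present. This is our invention, not FCE/CCE or MicroBooNE code; the `shifter --image=...` invocation form follows NERSC's public Shifter documentation, and the image name and default payload are placeholders.

```python
# Hypothetical job wrapper: use Shifter if available, else run bare-metal.
import shutil
import subprocess
import sys

IMAGE = "docker:myexperiment/offline:latest"   # placeholder image name

def run(payload):
    if shutil.which("shifter"):                # Shifter runtime on PATH?
        cmd = ["shifter", f"--image={IMAGE}"] + payload
    else:
        cmd = payload                          # bare-metal fallback
    return subprocess.call(cmd)

if __name__ == "__main__":
    sys.exit(run(sys.argv[1:] or ["echo", "hello from inside the image"]))
```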

Networks

- ESnet, in conjunction with the big computing facilities, has been working to improve data movement between facilities; a factor of 2 improvement over what had existed has already been achieved.
- Establishing protocols and tools to make petabyte-per-day transfers straightforward (see the rate check below).
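As a quick sanity check on what petabyte-per-day transfers demand of the network, a one-line calculation (our arithmetic, not from the slide, assuming decimal petabytes):

```python
# Sustained rate implied by moving one decimal petabyte per day.
BYTES_PER_PB = 1e15
SECONDS_PER_DAY = 86_400
rate_gbps = BYTES_PER_PB * 8 / SECONDS_PER_DAY / 1e9
print(f"1 PB/day ~ {rate_gbps:.1f} Gb/s sustained")  # ~92.6 Gb/s
```

That is, routine petabyte-per-day movement needs roughly a fully utilized 100 Gb/s path end to end, which is why the protocol and tooling work above matters.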

Final Thoughts

- For HEP to execute its ambitious science program over the next decade, we need exascale computing.
- We are motivated to partner with ASCR to ensure the success of our exascale program.
- We need guidance on how to build the proper partnerships.