Large-scale accelerator simulations: Synergia on the Grid

[Title slide. Diagram: the Synergia software stack — C++ field solver (FFT, multigrid), Fortran 90 single-particle optics/utilities, Python wrapper/job control glue, input & lattice (MAD), and analysis tools (Octave, C++) — grouped as software → simulations → data, with results feeding beam studies; inset beam images labeled turn 1, 16, 19, and 27.]

Synergia
● Simulate multi-particle physics in accelerators
● Computationally intensive
  – 1–10's of millions of macroparticles
  – 10's of thousands (or more) of PDE solves
● Massively parallel
  – Clusters and supercomputers
    ● 64-node Linux cluster typical
    ● 512 processors at NERSC

[Diagram: the Synergia software stack, as on the title slide.]
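The split the diagram describes — single-particle optics alternating with a collective space-charge solve, glued together in Python — can be illustrated with a minimal sketch. This is not Synergia's API: the function names, grid size, and kick strength are hypothetical, and a real run distributes millions of macroparticles and the FFT solves over MPI ranks.

# Illustrative split-operator loop: alternate a single-particle optics step with a
# collective space-charge kick computed from an FFT-based Poisson solve.
# All names and constants are hypothetical; this is not the Synergia API.
import numpy as np

def drift(particles, length):
    """Single-particle optics step: advance transverse positions by angle * length."""
    particles[:, 0] += length * particles[:, 1]   # x += L * x'
    particles[:, 2] += length * particles[:, 3]   # y += L * y'
    return particles

def space_charge_kick(particles, grid=64, strength=1e-4):
    """Collective step: deposit charge on a grid, solve the Poisson equation with
    FFTs, and apply the resulting field as a kick to the momenta."""
    x, y = particles[:, 0], particles[:, 2]
    rho, xedges, yedges = np.histogram2d(x, y, bins=grid)
    k = 2 * np.pi * np.fft.fftfreq(grid)
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    k2[0, 0] = 1.0                                # avoid dividing by zero at the DC mode
    phi = np.fft.ifft2(np.fft.fft2(rho) / k2).real
    Ex, Ey = np.gradient(-phi)
    ix = np.clip(np.digitize(x, xedges) - 1, 0, grid - 1)
    iy = np.clip(np.digitize(y, yedges) - 1, 0, grid - 1)
    particles[:, 1] += strength * Ex[ix, iy]
    particles[:, 3] += strength * Ey[ix, iy]
    return particles

# A few thousand macroparticles and a hundred steps; production runs use millions of
# macroparticles and tens of thousands of solves, spread over many MPI ranks.
bunch = 0.01 * np.random.randn(5000, 4)
for step in range(100):
    bunch = drift(bunch, length=0.1)
    bunch = space_charge_kick(bunch)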

Running Synergia
(1) Few-person collaboration
(2) Simulations require complex input parameters
(3) Output consists of many files
(4) Need to take advantage of computing resources wherever they are available

Grid computing is the answer to (4), but the increase in complexity arising from (2) and (3) has to be mitigated. The tools provided by Synergia allow the scientist to do science without getting bogged down in bookkeeping.

Computing on the Grid
● Scientist uses local resources for most tasks
● Remote systems used for computationally intensive tasks only
● In our case, the computationally intensive tasks are running the simulations and some of the analysis

[Diagram: workflow — job creation → job export → remote run → import results, with the job database and analysis tools on the local side.]
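In outline, that cycle looks like the sketch below: build a self-contained job directory locally, export it to the remote resource, let the remote batch system run it, then import the packed output for local analysis. The remote host and directory names are hypothetical, and plain scp stands in for whatever transfer and submission tools a given grid site provides.

# Illustrative export/import cycle; scp is a stand-in for the site's grid tools.
import subprocess
from pathlib import Path

REMOTE = "user@cluster.example.org"   # hypothetical remote resource

def export_job(job_dir):
    """Copy the self-contained job directory to the remote site."""
    subprocess.run(["scp", "-r", str(job_dir), f"{REMOTE}:runs/"], check=True)

def import_results(run_name, local_dir="results"):
    """Bring the packed output files back for analysis with local tools."""
    Path(local_dir).mkdir(exist_ok=True)
    subprocess.run(["scp", "-r", f"{REMOTE}:runs/{run_name}/output", local_dir],
                   check=True)

export_job("jobs/run_042")
# ... the remote batch system runs the simulation and packs its output ...
import_results("run_042")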

Job creation
● Python-based system
  – Python not required on target site
● Job contains
  – Batch input
    ● created from template
  – Input files
    ● user-defined
  – Utilities
    ● clean output, pack output
  – Description
    ● human- and machine-readable

[Diagram: job directory containing the batch file, input files, utility scripts, and description.]

Goal is reproducibility
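A rough sketch of what assembling such a job directory can look like is given below. The template text, file names, and directory layout are assumptions for illustration, not the actual Synergia job-creation code.

# Assemble a job directory from its pieces: a batch file rendered from a template,
# user-defined input files, utility scripts, and a machine-readable description.
# File names, template text, and layout are hypothetical.
import json
import shutil
from pathlib import Path
from string import Template

BATCH_TEMPLATE = Template("""#!/bin/bash
#PBS -l nodes=${nodes}
mpirun -np ${processes} synergia ${input_file}
./pack_output.sh
""")

def create_job(job_dir, nodes, processes, input_file, description):
    job = Path(job_dir)
    job.mkdir(parents=True)
    # Batch input, created from the template above.
    (job / "job.batch").write_text(BATCH_TEMPLATE.substitute(
        nodes=nodes, processes=processes, input_file=Path(input_file).name))
    # User-defined input files (e.g. the MAD lattice description).
    shutil.copy(input_file, job / Path(input_file).name)
    # Utility scripts: clean output, pack output.
    for script in ("clean_output.sh", "pack_output.sh"):
        shutil.copy(Path("utilities") / script, job / script)
    # Human- and machine-readable description, so the run can be reproduced.
    (job / "description.json").write_text(json.dumps(description, indent=2))
    return job

create_job("jobs/run_042", nodes=64, processes=128, input_file="booster.mad",
           description={"study": "space charge", "turns": 27, "nodes": 64})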

Managing job options
● Python module for command-line options
● Groups of options can be composed
  – General Synergia options
  – Batch options
  – Application-specific options
  – etc.
● Command line is stored for cut-and-paste modification
● Automatic command-line help generation
● Automatic human-readable summary
● Options for created jobs can be added to the database
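The slides do not show the options module itself; as a rough standard-library analogue, the sketch below composes option groups, gets --help generation for free, stores the exact command line for later cut-and-paste, and prints a human-readable summary. The group names and defaults are assumptions.

# Rough analogue of composable job options using argparse (not Synergia's module).
import argparse
import sys

parser = argparse.ArgumentParser(description="Synergia job options (illustrative)")

general = parser.add_argument_group("general Synergia options")
general.add_argument("--macroparticles", type=int, default=1_000_000)
general.add_argument("--turns", type=int, default=27)

batch = parser.add_argument_group("batch options")
batch.add_argument("--nodes", type=int, default=64)
batch.add_argument("--walltime", default="08:00:00")

app = parser.add_argument_group("application-specific options")
app.add_argument("--lattice", default="booster.mad")

opts = parser.parse_args()

# Store the exact command line so it can be cut-and-pasted and modified later.
with open("command_line.txt", "w") as f:
    f.write(" ".join(sys.argv) + "\n")

# Human-readable summary of every option that went into the job.
for name, value in sorted(vars(opts).items()):
    print(f"{name:16s} = {value}")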

Job database

Job information is automatically entered into a spreadsheet.
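As a minimal sketch of that bookkeeping step, the job-creation tools might append one row per job to a spreadsheet-readable file, as below. The field names are assumptions, not the actual schema.

# Append one row per created job to a spreadsheet-readable CSV file.
# Field names are illustrative only.
import csv
import datetime
from pathlib import Path

def record_job(db_path, job_dir, options):
    row = {"date": datetime.date.today().isoformat(), "job_dir": str(job_dir), **options}
    db = Path(db_path)
    is_new = not db.exists()
    with db.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if is_new:
            writer.writeheader()   # write the column headers the first time
        writer.writerow(row)

record_job("jobs.csv", "jobs/run_042",
           {"macroparticles": 1_000_000, "turns": 27, "nodes": 64})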

Results
● The measure of a scientific computing project is the science it produces
● The Synergia infrastructure has allowed us to produce more science with less time wasted on tedious tasks
  – Better utilization of resources
  – Less time spent bookkeeping
  – Fewer redundant simulation runs

The measure of a scientific computing project is the science it produces.

[Image: the Fermilab Booster accelerator.]