Development Timelines: Ken Kennedy, Andrew Chien, Keith Cooper, Ian Foster, John Mellor-Crummey, Dan Reed

Presentation transcript:

Development Timelines
Ken Kennedy, Andrew Chien, Keith Cooper, Ian Foster, John Mellor-Crummey, Dan Reed

What We Have Today
Prototype testbeds with middleware
Prototype of execution components in ScaLAPACK and Cactus
Design for the execution environment
—Implementation of resource specification (AART)
—Interfaces for Scheduler/Resource Negotiator
 –Prototype resource scheduler (Dail)
 –Prototype renegotiator (Sievert)
—Prototype Contract Monitoring System (see the sketch after this list)
Grid-ready libraries with performance models
Tools for extracting information from program executables
—For performance estimation on single nodes
DSL design for Signal/Image Processing
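The contract-monitoring prototype listed above is easiest to picture with a small example. The following is a minimal, hypothetical Python sketch, not the GrADS code: it compares a measured per-iteration time against a performance-model prediction and flags a violation when the slowdown exceeds a tolerance. All names, numbers, and the tolerance policy are illustrative assumptions.

# Illustrative sketch only; not the GrADS contract monitor.
from dataclasses import dataclass

@dataclass
class Contract:
    predicted_seconds_per_iteration: float  # from the performance model
    tolerance: float = 1.5                  # allowed slowdown before violation

def check_contract(contract, measured_seconds_per_iteration):
    """Return True if the measured rate still satisfies the contract."""
    ratio = measured_seconds_per_iteration / contract.predicted_seconds_per_iteration
    return ratio <= contract.tolerance

# Example: the model predicted 2.0 s/iteration; we measured 3.4 s/iteration.
c = Contract(predicted_seconds_per_iteration=2.0)
if not check_contract(c, measured_seconds_per_iteration=3.4):
    print("contract violation: consider renegotiating resources or rescheduling")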

What We Will Have on CGrADS Day One
Prototype Execution System for Heterogeneous Clusters
—Prototype Scheduler/Resource Negotiator (see the sketch after this list)
—Binder
—Run-time system
—Contract monitor (working together)
ScaLAPACK re-implemented using the prototype
Cactus using generic resource selector
Prototype automatic performance modeling for black boxes
Script-based application composition (without optimization)
Testbeds
—MicroGrid running ScaLAPACK and Cactus
—Integrated NWS and prediction in MacroGrid and configuration tools
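As a rough illustration of what a Scheduler/Resource Negotiator does, here is a hypothetical Python sketch, not the CGrADS prototype: it ranks candidate clusters with a toy performance model that trades aggregate compute speed against communication overhead. The model, cluster names, and numbers are invented for the example.

# Hypothetical resource-selection sketch; candidate data and model are made up.
def predicted_runtime(problem_size, nodes, node_gflops, network_penalty):
    # Toy model: compute time shrinks with aggregate speed,
    # communication overhead grows with node count.
    compute = problem_size / (nodes * node_gflops)
    comm = network_penalty * nodes
    return compute + comm

candidates = {
    "cluster_a": {"nodes": 16, "node_gflops": 1.0, "network_penalty": 0.5},
    "cluster_b": {"nodes": 64, "node_gflops": 0.5, "network_penalty": 2.0},
}

problem_size = 1.0e3
best = min(candidates,
           key=lambda name: predicted_runtime(problem_size, **candidates[name]))
print("selected:", best)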

Currently Targeted Application Milestones
Three codes concurrently, each engaged for two to three years
2002
—Cactus: traditional PDE solver, aggressive application scenarios
—CAPS: dynamic data acquisition and real-time data ingest
—ChemEng Workbench: application service scenario prototypes
2003
—Cactus: by now transitioned to operational use by application group
—CAPS: adaptive execution for high-speed prediction
—ChemEng Workbench: application service scenarios operational
—CMS/GriPhyN: query estimation and dynamic scheduling
—BIRN-like distributed bioscience: emergent behavior issues
2004
—CAPS: by now transitioned to operational use by application group
—CMS/GriPhyN: large-scale experimentation in production settings
—NEES: application service and real-time data analysis scenarios

CMS Data Reconstruction Example (Scott Koranda, Miron Livny, and others)
Workflow spanning a Caltech workstation, a master Condor job running at Caltech, a secondary Condor job on the Wisconsin (WI) Condor pool, the NCSA Linux cluster, and NCSA UniTree (a GridFTP-enabled FTP server):
2) Launch secondary job on WI pool; input files via Globus GASS
3) 100 Monte Carlo jobs on the Wisconsin Condor pool
4) 100 data files transferred via GridFTP, ~1 GB each
5) Secondary reports complete to master
6) Master starts reconstruction jobs via Globus jobmanager on the cluster
7) GridFTP fetches data from UniTree
8) Processed Objectivity database stored to UniTree
9) Reconstruction job reports complete to master
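To make the staged workflow above concrete, here is a hypothetical Python sketch, not the production CMS scripts: it builds a Condor submit description for the 100 Monte Carlo jobs and the globus-url-copy command lines for the GridFTP transfers in step 4. Hostnames, paths, and the executable name are placeholders, and the commands are printed rather than executed.

# Illustrative sketch of the staged workflow above; hostnames, paths, and the
# submit description are hypothetical, and commands are printed, not executed.
NUM_JOBS = 100

condor_submit_description = f"""
universe   = vanilla
executable = run_montecarlo
output     = mc_$(Process).out
error      = mc_$(Process).err
log        = mc.log
queue {NUM_JOBS}
"""

def gridftp_copy(src_url, dst_url):
    # globus-url-copy takes a source URL and a destination URL.
    return f"globus-url-copy {src_url} {dst_url}"

# Step 4: move the ~1 GB output files from the Wisconsin pool to NCSA.
transfer_cmds = [
    gridftp_copy(f"gsiftp://wi-pool.example.edu/data/mc_{i}.dat",
                 f"gsiftp://ncsa-cluster.example.edu/scratch/mc_{i}.dat")
    for i in range(NUM_JOBS)
]

print(condor_submit_description)
print(transfer_cmds[0], "...", transfer_cmds[-1])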

Program Preparation System Milestones
2002
—Preliminary automated support for performance models (black boxes) (see the sketch after this list)
—Binder includes local optimization, inserts probes and actuators
—Prototype DSL for signal processing
—Evaluate original COP design
2003
—Binder support for global optimization
—Experiment with contract monitoring/reporting in applications
—Evaluate and extend DSL support for signal processing
2004
—First dynamic optimizer prototype, plan for retargeting Binder
—Initial telescoping language prototype based on experience with signal processing DSL
—Whole Program Compiler generates initial COPs
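For the "performance models (black boxes)" milestone, one simple way to build such a model is to fit measured timings to a parametric form. The sketch below is a hypothetical example with made-up data: it fits T(n) = a + b*n by ordinary least squares in plain Python. Real black-box models are richer than a straight line, so treat this only as an illustration of the idea.

# Illustrative sketch of black-box performance modeling: fit T(n) = a + b*n
# to observed timings by ordinary least squares. Data points are made up.
sizes   = [100, 200, 400, 800]          # problem sizes n
timings = [0.9, 1.7, 3.4, 6.9]          # measured seconds at each size

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(timings) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, timings)) / \
    sum((x - mean_x) ** 2 for x in sizes)
a = mean_y - b * mean_x

def predict(size):
    return a + b * size

print(f"model: T(n) = {a:.3f} + {b:.5f} * n; predicted T(1600) = {predict(1600):.2f} s")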

Program Execution System Milestones
2002
—Virtual organization management tools
—Resource selector and application manager prototypes
—Temporal contract violation specification
—Rescheduling models including distribution costs & info quality (see the sketch after this list)
2003
—Integrated resource monitoring and prediction prototype
—Resource selector and application manager tools
—Composable contract specification and tools
—Reconfigurable object program specification
—Scheduling models for highly parallel and data Grid applications
2004
—Enhanced application and resource measurement infrastructure
—Enhanced resource scheduling infrastructure with advanced reservation, etc.
—Performance economics for global resource scheduling
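The rescheduling-models bullet above combines two ingredients: the cost of redistributing data and the quality of the runtime predictions. The hypothetical sketch below shows one way those could enter a migrate-or-stay decision; the decision rule, parameter names, and numbers are assumptions for illustration, not the CGrADS policy.

# Hypothetical rescheduling sketch: migrate only if the predicted remaining
# time elsewhere, plus the cost of redistributing data, beats staying put by
# a margin that reflects how much we trust the prediction. Numbers are made up.
def should_reschedule(remaining_here, remaining_there, redistribution_cost,
                      prediction_confidence):
    # prediction_confidence in (0, 1]: lower confidence demands a bigger win.
    required_margin = (1.0 / prediction_confidence) * redistribution_cost
    return remaining_here - remaining_there > required_margin

print(should_reschedule(remaining_here=600.0, remaining_there=300.0,
                        redistribution_cost=120.0, prediction_confidence=0.8))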

Execution Environment Milestones

Software Development Process
Andrew A. Chien
Department of Computer Science and Engineering, University of California, San Diego

Software Development Process
Development Efficiency versus Robust Output
—Key is choosing where to live on this spectrum
CGrADS Objectives
—Ground-breaking research
—Rapid development of functionality and experimentation
—Difficult to know what will become widely used
 –Hardening of selected components

Software Development Approach
Experimental development with lightweight process
Core development with close to full process
Targeted hardening of what becomes core
—Enable researchers inside and outside CGrADS to build on core
—Software output not supported as a product

Software Development Process
Project Software Manager defines and enforces process; drives progress
Infrastructure
—Revision control system (CVS, SourceSafe, ClearCase)
 –Code, documents
—Defect tracking system (ClearQuest)
—Design, documentation, and coding guidelines
—Software infrastructure
Process
—Lightweight process
—Industrial process

Two Flavors of Process
"Lightweight" Process (Research Strength)
—Requirements and Design; Review
—Implementation and Test; Review
—Integration; Test
—Iterative Improvement
Full Process (Industrial Strength)
—Requirements; Review
—Design; Review
—Implementation; Test and Review
—Integration; Test
—Iterative Improvement