
ANR CIGC LEGO (ANR-CICG-05-11), Bordeaux, December 11th, 2006

Automatic Application Deployment on Grids
Landry Breuil, Boris Daix, Sébastien Lacour, Christian Pérez
PARIS Research Team, INRIA/IRISA Rennes

How to Deploy My Application on Grid Resources?
[Figure: an application made of a flow code, a transport code, a controller, and a visualization component (exchanging velocity, scalar, and concentration data) must be mapped onto heterogeneous grid resources: homogeneous clusters (SAN, LAN), a supercomputer, all interconnected by a WAN]

Manual Deployment
- Discover available grid resources
- Select grid resources for execution
  - OS and architecture compatibility
- Map the application onto the selected resources
  - MPI processes
  - Components
- Select compatible compiled executables
- Upload and install executables, stage input files in
- Launch processes on remote computers
- Set the configuration parameters of the application
  - Components' attributes
  - Network topology information
Too complex!
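Each manual step above ultimately boils down to per-node shell commands. A minimal sketch of what a user would script by hand (host names, file names, and the remote directory are all hypothetical); it only builds the command lists, without executing anything:

```python
def stage_commands(host, executable, input_files, remote_dir="/tmp/run"):
    """Build the ssh/scp commands that install one executable and its
    input files on a remote node (nothing is executed here)."""
    cmds = [["ssh", host, "mkdir", "-p", remote_dir]]
    for f in [executable] + list(input_files):
        cmds.append(["scp", f, f"{host}:{remote_dir}/"])
    return cmds

def launch_command(host, executable, args, remote_dir="/tmp/run"):
    """Build the ssh command that starts one remote process."""
    return ["ssh", host, f"{remote_dir}/{executable}"] + list(args)

# One code, three nodes: the command count grows with every node,
# file, and parameter -- which is why manual deployment scales badly.
hosts = ["node1.example.org", "node2.example.org", "node3.example.org"]
all_cmds = []
for h in hosts:
    all_cmds += stage_commands(h, "flow_code", ["mesh.dat"])
    all_cmds.append(launch_command(h, "flow_code", ["--steps", "100"]))
```

Even this toy case already needs a dozen commands, before handling heterogeneous architectures or application configuration.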

Automatic Deployment
- Automatic:
  - Resource discovery
  - Execution node selection
  - File installation
  - Process launch
  - Application configuration
- Hide application complexity
- Hide grid complexity
"Stop reading your e-mails!"

Generic Application Description
- Translator
  - From a specific to a generic application description
  - Straightforward to write
[Figure: rather than one dedicated planner per model (CCM, MPI, GridCCM), each specific application description is translated into a generic application description; a single deployment planner then produces a deployment plan, which is executed]
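The translator step is mechanical, which is why it is "straightforward to write". A hedged sketch of the idea (field names are invented for illustration; ADAGE's actual descriptions are XML documents), turning an MPI-specific description into a generic process-group description:

```python
def mpi_to_generic(mpi_desc):
    """Translate an MPI-specific application description into a generic
    one: an MPI job becomes one group of identical processes that must
    be co-allocated and started together."""
    return {
        "process_groups": [{
            "name": mpi_desc["job_name"],
            "cardinality": mpi_desc["nprocs"],   # number of MPI ranks
            "executable": mpi_desc["binary"],
            "placement": "co-allocated",         # ranks start together
        }]
    }

generic = mpi_to_generic({"job_name": "flow", "nprocs": 16, "binary": "flow_code"})
```

A CCM translator would do the symmetric job, emitting one process group per component; the planner only ever sees the generic form.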

Identification of the Steps of Automatic Deployment
[Figure: MPI and CCM application descriptions are translated into a generic application description; together with a resource description and control parameters, it feeds deployment planning, then deployment plan execution, then application configuration; these steps form the deployment tool for static applications]
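The deployment planning step can be pictured as a constraint filter followed by a mapping. A toy sketch under strong simplifying assumptions (round-robin placement, OS/architecture matching only; a real planner handles far more constraints):

```python
def plan(process_groups, resources):
    """Map each process of each group onto a compatible node,
    round-robin over the nodes satisfying the OS/arch constraints."""
    mapping = []
    for group in process_groups:
        nodes = [r for r in resources
                 if r["os"] == group["os"] and r["arch"] == group["arch"]]
        if not nodes:
            raise ValueError(f"no compatible node for group {group['name']}")
        for i in range(group["cardinality"]):
            mapping.append((group["name"], i, nodes[i % len(nodes)]["host"]))
    return mapping

mapping = plan(
    [{"name": "flow", "cardinality": 4, "os": "linux", "arch": "x86_64"}],
    [{"host": "n1", "os": "linux", "arch": "x86_64"},
     {"host": "n2", "os": "linux", "arch": "x86_64"}],
)
```

The resulting deployment plan (process, rank, host) triples are what the plan-execution step then turns into file transfers and remote launches.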

ADAGE Feature List
- MPI, JXTA, and CCM application descriptions
- Network topology description
- Generic application description
- Simple control parameters, simple planner
- Deployment plan execution (RSH/SSH)
- Basic file transfer support
  - Aware of file visibility (~NFS)
  - Does not check for already available files
- Redeployment support
  - Either re-play, or add/remove elements
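"Aware of file visibility" means the tool avoids copying a file once per node when several nodes see the same filesystem (e.g. an NFS mount shared by a cluster). A rough sketch of that idea with a hypothetical data model: one transfer per shared-filesystem group instead of one per node:

```python
def transfers_needed(nodes, filename):
    """Return one (representative_node, filename) transfer per
    shared-filesystem group, instead of one transfer per node."""
    seen_fs = set()
    transfers = []
    for node in nodes:
        if node["fs"] not in seen_fs:   # first node seeing this filesystem
            seen_fs.add(node["fs"])
            transfers.append((node["host"], filename))
    return transfers

nodes = [{"host": "a1", "fs": "nfs-clusterA"},
         {"host": "a2", "fs": "nfs-clusterA"},
         {"host": "b1", "fs": "nfs-clusterB"}]
```

Here two clusters of any size need exactly two copies of the file, one per shared mount.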

How Easy Is It, in Practice?
- As simple as A-B-C:
  adage-deploy -resource -application my_appl.zip -ctrl_param my_control_parameters
- Grid resource description
  - Written once and for all by grid admins
- Optional control-parameter constraints
  - Keep control of the deployment process

ADAGE & Grid'5000
- ADAGE is not able to talk directly to OAR
- A Perl script automatically generates the resource description from an OARGrid job id:
  oarstat2resources.pl -g 6332 -outputfile r.xml
  adage-deploy -inres r.xml ...
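The role of a script like oarstat2resources.pl can be sketched as follows (in Python here; the node names and XML layout are illustrative, not ADAGE's actual resource schema): take the node list of an OAR reservation and emit a resource-description document.

```python
import xml.etree.ElementTree as ET

def nodes_to_resource_xml(job_nodes):
    """Turn the nodes of an OAR reservation (cluster -> host list)
    into a minimal resource-description XML document."""
    root = ET.Element("resources")
    for cluster, hosts in job_nodes.items():
        c = ET.SubElement(root, "cluster", name=cluster)
        for h in hosts:
            ET.SubElement(c, "node", host=h)
    return ET.tostring(root, encoding="unicode")

xml_doc = nodes_to_resource_xml({"sophia": ["helios-1", "helios-2"]})
```

The generated file (r.xml in the slide) is then passed straight to adage-deploy, so the user never writes a resource description by hand for a batch-reserved set of nodes.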

Some ADAGE Results
- JXTA: 1 peer == 1 process
  - M. Jan deployed ~ peers on ~145 machines in ~95 seconds
  - Most of the time is taken by ssh commands
- CCM: 1 component == 1 process
  - H. Bouziane deployed 4000 components on 974 processors (432 machines on several G5K sites)
  - Enabled a CCM-plugin optimization

Towards ADAGE v2
- Complete rewrite of ADAGE, by L. Breuil
- Objectives of ADAGE v2:
  - Provide a clean code architecture
  - Ease the addition of
    - programming models (e.g. GridCCM), i.e. plugins
    - planners
  - Better back-end support (GAT, Taktuk, ...)
- Support for "dynamic" applications
  - In cooperation with B. Daix's PhD work

Discussion
- ADAGE: a model to support automatic application deployment
- Ongoing/future work:
  - Finish ADAGE v2 (~ beginning of 2007)
  - Stabilize the plugin API
  - Stabilize the planner API with respect to resources
  - Dynamic application management
    - PadicoTM support?
    - Adapt the application description according to the resources
    - DIET support
  - Not directly targeted:
    - Fault tolerance
    - Application monitoring