LEGO – Rennes, July 3, 2007. Deploying Gfarm and JXTA-based applications using the ADAGE deployment tool. Landry Breuil, Loïc Cudennec and Christian Perez.

Presentation transcript:

LEGO – Rennes, July 3, 2007. Deploying Gfarm and JXTA-based applications using the ADAGE deployment tool. Landry Breuil, Loïc Cudennec and Christian Perez, IRISA / INRIA, PARIS project-team.

Motivations (1/2): offering persistent storage to a grid data sharing service

Data sharing for grid-based applications
- Numerical simulations, collaborative design, distributed databases, code-coupled applications
- Desirable features: transparency, consistency, uniform access, persistency

Grid data sharing service (JuxMem, INRIA/IRISA)
- Transparent access to replicated data
- Fault tolerance and consistency mechanisms
- Data stored in physical memory

Global distributed file system (Gfarm, AIST/Univ. Tsukuba)
- File fragmentation and replication
- Smart replica selection
- Data stored on disk: secondary persistent storage

Motivations (2/2): proposition of a common architecture (#1 of 3 propositions)

[Figure: Cluster #1 and Cluster #2; a JuxMem Global Data Group (GDG) made of JuxMem providers, one of which acts as GDG leader; a Gfarm GFSD node]

- One particular JuxMem provider (the GDG leader) flushes data to Gfarm
- Other Gfarm copies can then be created using Gfarm's gfrep command

Towards a mixed deployment using the ADAGE deployment tool

Deploying the Gfarm file system
- 4 roles: metadata server, metadata cache server, file system node and client node
- Dependencies between roles are tree-based: a role should start after its father

Deploying the JuxMem service (a JXTA-based application)
- 2 types of peers: rendezvous and edge
- Dependencies between peers are tree-based: a peer should start after its rendezvous peer

Deployment constraints
- Gfarm should be launched before JuxMem
- The Gfarm client should share the same node as the JuxMem provider

Generic deployment tool: ADAGE (INRIA/IRISA)

Gfarm file system (1/2): tree-based dependencies

[Figure: a tree with the metadata server at the root, the metadata cache server below it, and client and storage nodes as leaves; each node's configuration file records the server IP and TCP port and the cache IP and TCP port]
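The tree-based startup order above can be sketched as a breadth-first walk over the role tree. This is an illustrative model only, not ADAGE code; the role names come from the figure, while the addresses and ports are made-up example values.

```python
# Sketch (not ADAGE code): start Gfarm roles in tree order, so that a role
# starts only after its father, and hand each role the contact information
# (IP, TCP port) of the metadata server and of its cache server.

from collections import deque

# Tree from the figure: server -> cache -> {client, storage, client, storage}
tree = {
    "server": ["cache"],
    "cache":  ["client1", "storage1", "client2", "storage2"],
}

contact = {                      # assumed example addresses and ports
    "server": ("10.0.0.1", 10601),
    "cache":  ("10.0.0.2", 10602),
}

def start_order(root="server"):
    """Breadth-first walk: every role appears after its father."""
    order, queue = [], deque([root])
    while queue:
        role = queue.popleft()
        order.append(role)
        queue.extend(tree.get(role, []))
    return order

def config_for(role):
    """Each role's configuration file records server and cache contacts."""
    return {"server": contact["server"], "cache": contact["cache"]}

order = start_order()
```

The same breadth-first ordering applies to JuxMem, with the rendezvous peer as the root of the tree.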

Gfarm file system (2/2): describing the application

<!-- Defining TCP ports for the Gfarm roles -->
<server name="metadata_server" port_master="10602" port_gfmd="10601" port_gfsd="10600" />
<!-- Naming this metadata cache server -->
<proc name="metadata_cache_server" role="agent" cardinality="1" />
<!-- Defining which cache server to connect with -->
<proc name="fs_node" role="gfsd" cardinality="1" agent="metadata_cache_server" />
<!-- Binary to execute on the client node -->
<proc name="gf_client" role="client" binary="true" cardinality="1" agent="metadata_cache_server" />

JuxMem (1/3): a JXTA-based application

[Figure: edge peers attached to rendezvous (rdv) peers, the rendezvous peers being connected together]

PlatformConfig file: the peer's own TCP port, plus the IP and TCP port of the seed (rendezvous) peer
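For concreteness, a PlatformConfig fragment might carry information of the following shape. This is a simplified illustration only: the real JXTA PlatformConfig is a much larger XML document and the element names below are placeholders, not JXTA's actual schema; what matters is the information ADAGE writes into it (the peer's TCP port and the seed rendezvous address).

```
<!-- Illustrative only; element names are simplified placeholders -->
<TcpTransport>
  <Port>9703</Port>                      <!-- this peer's own TCP port -->
</TcpTransport>
<RendezvousConfig>
  <seed>tcp://node1:9701</seed>          <!-- IP and port of the seed rdv peer -->
</RendezvousConfig>
```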

JuxMem (2/3): describing the JXTA application (language: JDL, by Mathieu Jan)

<!-- Edge instantiation: profile, cardinality, and the rendezvous (seed) peer to attach to -->
<edge id="provider" profile_name="p_provider" cardinality="1" rdv="manager" />
<edge id="writer" profile_name="p_writer" cardinality="1" rdv="manager" />
<edge id="reader" profile_name="p_reader" cardinality="1" rdv="manager" />
<!-- The slide also defines a profile for the "manager" rendezvous peer (rendezvous instantiation); that element is not preserved in this transcript -->

JuxMem (3/3): deploying step-by-step

[Figure: ADAGE deploys the topology on node1 and node2; the rdv peer starts first on port 9701 and signals ADAGE through a callback; ADAGE then writes each edge's PlatformConfig and waits for the peer to come up]

JuxMem (3/3): deploying step-by-step, dynamic TCP port assignment

[Figure: the rdv peer calls back with its port (9701); edge1, edge2 and edge3 are then launched with seed=node1:9701 and each calls back with its own dynamically assigned port (edge1: 9701, edge2: 9703, edge3: 9705)]
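The callback-based protocol in the figure can be modelled in a few lines. This is not ADAGE code: the `Deployer` and `Peer` classes, the in-process "callback", and the port-allocation scheme are assumptions made only to sketch the idea that the rendezvous peer must come up (and report its dynamically chosen port) before the edges can be configured with the seed address.

```python
# Illustrative model (not ADAGE code) of the callback-based deployment
# protocol: the rdv peer starts first, reports its port back to the
# deployer, and the edges are then configured with the resulting seed.

class Peer:
    def __init__(self, name, node, seed=None):
        self.name, self.node, self.seed = name, node, seed
        self.port = None

    def start(self, allocate_port, callback):
        # A real peer would bind a free TCP port chosen at runtime, then
        # report it back to the deployer (the "callback" mechanism).
        self.port = allocate_port(self.node)
        callback(self)

class Deployer:
    def __init__(self):
        self.next_port = {}   # per-node dynamic port counter (assumed scheme)
        self.ready = {}       # peers that have called back: name -> (node, port)

    def allocate_port(self, node):
        port = self.next_port.get(node, 9701)
        self.next_port[node] = port + 2
        return port

    def callback(self, peer):
        self.ready[peer.name] = (peer.node, peer.port)

    def deploy(self):
        # Step 1: start the rendezvous peer and wait for its callback,
        # because the edges need its (node, port) pair as their seed.
        rdv = Peer("rdv", "node1")
        rdv.start(self.allocate_port, self.callback)
        seed = f"{rdv.node}:{rdv.port}"
        # Step 2: write each edge's PlatformConfig (here: pass the seed),
        # start the edges, and collect their callbacks.
        edges = [Peer("edge1", "node2", seed),
                 Peer("edge2", "node1", seed),
                 Peer("edge3", "node2", seed)]
        for e in edges:
            e.start(self.allocate_port, self.callback)
        return rdv, edges

d = Deployer()
rdv, edges = d.deploy()
```

The point of the two-phase structure is that no edge PlatformConfig can be written until the rdv callback has delivered the dynamically assigned seed port.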

Meta plugin (1/2): mixed architecture

[Figure: a cluster group containing the Gfarm server, agent, file system node and client together with the JuxMem manager, provider and client; the Gfarm client and the JuxMem provider share a node]

Meta plugin (2/2): describing the mixed architecture

<!-- Pointing to the Gfarm description -->
<specific id="gfarm-desc" path="./tests/gfarm-appl.xml"/>
<!-- Pointing to the JuxMem description -->
<specific id="juxmem-desc" path="./tests/jxta-appl.xml"/>
<!-- Expressing the ack dependency and the colocation, referencing gf_client_0; these elements are not fully preserved in this transcript -->
<plugin name="JXTA" file="tests/meta-gfarm-jxta-ctrl-params-spec.xml"/>
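The colocation constraint (the Gfarm client must share a node with the JuxMem provider) amounts to a simple planning rule when merging the two applications' placements. The sketch below is illustrative only; the process names and the `{process: node}` plan representation are hypothetical, not ADAGE's actual data model.

```python
# Sketch (not ADAGE code): merge the placements produced for the two
# applications, forcing colocated process pairs onto the same node.
# Process names and the plan representation are hypothetical.

def merge_with_colocation(gfarm_plan, juxmem_plan, colocated):
    """Merge two {process: node} plans; each (a, b) pair in `colocated`
    moves process a onto the node already chosen for process b."""
    plan = dict(juxmem_plan)
    plan.update(gfarm_plan)
    for a, b in colocated:
        plan[a] = plan[b]      # enforce the colocation constraint
    return plan

gfarm_plan  = {"gf_server": "node1", "gf_client_0": "node3"}
juxmem_plan = {"jux_manager": "node1", "jux_provider": "node2"}

plan = merge_with_colocation(gfarm_plan, juxmem_plan,
                             colocated=[("gf_client_0", "jux_provider")])
```

After the merge, gf_client_0 is relocated to the JuxMem provider's node regardless of where the Gfarm-only plan had placed it.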

Towards a mixed deployment using the ADAGE deployment tool

Some ADAGE features used by these plugins
- File transfer
- Wait for file creation
- Callback mechanism

Some experiments
- Deploying up to 40 Gfarm roles
- Adapting the JuxMem basic tutorial to ADAGE

Future work
- Using large-scale configurations