User-driven resource selection in GRID superscalar
Last developments and future plans in the framework of CoreGRID
Rosa M. Badia, Grid and Clusters Manager, Barcelona Supercomputing Center

Outline
1. GRID superscalar overview
2. User defined cost and constraints interface
3. Deployment of GRID superscalar applications
4. Run-time resource selection
5. Plans for CoreGRID

1. GRID superscalar overview
Programming environment for the Grid
Goals:
– Keep the Grid as transparent as possible to the programmer
Approach:
– Sequential programming (small changes from the original code)
– Specification of the Grid tasks
– Automatic code generation to build Grid applications
– Underlying run-time (resource, file and job management)

1. GRID superscalar overview: interface

GS_On();                                    /* initialise the GRID superscalar run-time */
for (int i = 0; i < MAXITER; i++) {
    newBWd = GenerateRandom();
    subst(referenceCFG, newBWd, newCFG);    /* these calls become Grid tasks */
    dimemas(newCFG, traceFile, DimemasOUT);
    post(newBWd, DimemasOUT, FinalOUT);
    if (i % 3 == 0) display(FinalOUT);
}
fd = GS_Open(FinalOUT, R);                  /* open a results file managed by the run-time */
printf("Results file:\n");
present(fd);
GS_Close(fd);
GS_Off();                                   /* shut down the run-time */

1. GRID superscalar overview: interface

void dimemas(in File newCFG, in File traceFile, out File DimemasOUT)
{
    char command[500];
    putenv("DIMEMAS_HOME=/usr/local/cepba-tools");
    sprintf(command, "/usr/local/cepba-tools/bin/Dimemas -o %s %s", DimemasOUT, newCFG);
    GS_System(command);
}

void display(in File toplot)
{
    char command[500];
    sprintf(command, "./display.sh %s", toplot);
    GS_System(command);
}

1. GRID superscalar overview: interface

interface MC {
    void subst(in File referenceCFG, in double newBW, out File newCFG);
    void dimemas(in File newCFG, in File traceFile, out File DimemasOUT);
    void post(in File newCFG, in File DimemasOUT, inout File FinalOUT);
    void display(in File toplot);
};

1. GRID superscalar overview: code generation

[Figure: code-generation flow — gsstubgen processes app.idl and generates app.h, app-stubs.c and app-worker.c; the master binary is built from the user's app.c plus the generated app-stubs.c/app.h, and the worker binary from the generated app-worker.c plus the user's app-functions.c]
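As a rough, conceptual illustration of what a generated master-side stub does (the actual gsstubgen output and run-time API are not shown in these slides; GS_SubmitTask below is a hypothetical placeholder), the stub replaces the local call with a request to the run-time, which records the file dependences and schedules the task on a Grid resource:

#include <stdio.h>

/* Conceptual sketch only, NOT the real generated code.  GS_SubmitTask is a
 * hypothetical stand-in that here just logs the request; in the real system
 * the stub hands the task over to the GRID superscalar run-time. */
static void GS_SubmitTask(const char *task, const char *in1, const char *in2,
                          const char *out1)
{
    printf("submit %s: inputs=%s,%s output=%s\n", task, in1, in2, out1);
}

/* Master-side stub replacing the local dimemas() defined in app-functions.c. */
void dimemas(char *newCFG, char *traceFile, char *DimemasOUT)
{
    GS_SubmitTask("dimemas", newCFG, traceFile, DimemasOUT);
}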

1. GRID superscalar overview: behaviour

for (int i = 0; i < MAXITER; i++) {
    newBWd = GenerateRandom();
    substitute (nsend.cfg, newBWd, tmp.cfg);
    dimemas (tmp.cfg, trace.trf, output.txt);
    postprocess (newBWd, output.txt, final.txt);
    if (i % 3 == 0) display(final.txt);
}

[Figure: task dependence graph built at run time, with tasks T1 i … T5 i per iteration i]

The run-time derives the graph from the file parameters: for example, tmp.cfg links substitute to dimemas within an iteration, and final.txt (an inout file) chains postprocess across iterations, so tasks with no pending dependences can run concurrently on the Grid.

1. GRID superscalar overview: run-time features
– Data dependence analysis
– File renaming
– Shared disks management
– File locality exploitation
– Resource brokering
– Task scheduling
– Task submission
– Checkpointing at task level
– Exception handling
Current version runs over Globus 2.x, using the API provided by Globus for file transfer, security, …
Ongoing developments of versions for:
– Ninf-g2
– ssh/scp
– GT4

2. User defined cost and constraints interface

[Figure: code-generation flow extended — besides app.h, app-stubs.c and app-worker.c, gsstubgen now also generates app_constraints.cc, app_constraints.h and app_constraints_wrapper.cc from app.idl]

2. User defined cost and constraints interface

File app_constraints.cc contains the interface of the functions for:
– Resource constraints specification
– Performance cost estimation

Sample default functions:

string Subst_constraints(file referenceCFG, double seed, file newCFG)
{
    string constraints = "";
    return constraints;
}

double Subst_cost(file referenceCFG, double seed, file newCFG)
{
    return 1.0;
}

2. User defined cost and constraints interface

Users can edit these functions to specify constraints and a performance cost for each task:
– Constraints syntax: Condor ClassAds
– Performance cost syntax: pure C/C++

string Dimem_constraints(file cfgFile, file traceFile)
{
    return "(member(\"Dimemas\", other.SoftNameList) && other.OpSys == \"Linux\" && other.Mem > 1024)";
}

double Dimem_cost(file cfgFile, file traceFile)
{
    double complexity, time;
    complexity = 10.0 * num_processes(traceFile) * no_p_to_p(traceFile)
                 + no_collectives(traceFile) + no_machines(cfgFile);
    time = complexity / GS_GFlops();
    return time;
}

3. Deployment of GRID superscalar applications
Java-based GUI
Allows GRID resources specification: host details, libraries location, …
Selection of the Grid configuration
Grid configuration checking process:
– Aliveness of the host (ping)
– The Globus service is checked by submitting a simple test job
– A remote job is sent that copies the code needed in the worker and compiles it
Automatic deployment:
– Sends and compiles the code in the remote workers and the master
Configuration file generation

3. Deployment of GRID superscalar applications
Resource specification (by the Grid administrator)
– Done only once
– The description of the GRID resources is stored in a hidden XML file

3. Deployment of GRID superscalar applications
Project specification (by the user)
– Selection of hosts
Afterwards the application is automatically deployed
A project configuration XML file is generated

4. Run-time resource selection
Run-time evaluation of the functions
– The constraint and performance cost functions dynamically drive the resource selection
When an instance of a function is ready for execution:
– The constraint function is evaluated, using the ClassAd library to match resource ClassAds against the task ClassAd
– The performance cost function is used to estimate the elapsed time of the function (ET)

4. Run-time resource selection
For each resource r that meets the constraints, the run-time estimates:
– FT(r): file transfer time to resource r
– ET(r): execution time of the task on resource r
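A minimal sketch of how this selection step could look, assuming the run-time picks, among the resources whose ClassAd matches the task constraints, the one with the smallest estimated total time FT(r) + ET(r) (the natural reading of this slide; the exact selection rule is not spelled out in the transcript). The Resource type and helper below are illustrative placeholders, not the actual GRID superscalar run-time API:

#include <stddef.h>

/* Illustrative resource-selection sketch, not the real run-time code. */
typedef struct {
    const char *name;
    double      gflops;      /* advertised performance of the resource     */
    double      ft_seconds;  /* estimated file transfer time to it (FT)    */
    int         matches;     /* 1 if its ClassAd satisfies the constraints */
} Resource;

/* ET: elapsed-time estimate from the user performance cost function
 * (e.g. Dimem_cost), here reduced to complexity / GFlops of the resource. */
static double estimate_ET(const Resource *r, double task_complexity)
{
    return task_complexity / r->gflops;
}

/* Pick, among the resources that meet the constraints, the one with the
 * smallest estimated total time FT + ET. */
static const Resource *select_resource(const Resource *pool, size_t n,
                                       double task_complexity)
{
    const Resource *best = NULL;
    double best_time = 0.0;

    for (size_t i = 0; i < n; i++) {
        if (!pool[i].matches)            /* constraint filter (ClassAds) */
            continue;
        double total = pool[i].ft_seconds + estimate_ET(&pool[i], task_complexity);
        if (best == NULL || total < best_time) {
            best = &pool[i];
            best_time = total;
        }
    }
    return best;  /* NULL if no resource satisfies the constraints */
}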

4. Run-time resource selection: call sequence

[Figure: original sequential call sequence — on the LocalHost, app.c calls app-functions.c directly]

4. Run-time resource selection: call sequence

[Figure: Grid call sequence — on the LocalHost, app.c calls app-stubs.c, which invokes the GRID superscalar runtime; the runtime evaluates app_constraints.cc through app_constraints_wrapper.cc and submits the task via GT2 to the RemoteHost, where app-worker.c calls app-functions.c]

5. Plans for CoreGRID WP7

[Figure: WP7 architecture — an Integrated toolkit supporting a Grid-unaware application, a Grid-aware application, and a runtime environment comprising: application meta-data repository, app-level info-cache, monitoring services, information services, resource management, PSE / user portal, application manager and steering/tuning (component steering)]

5. Plans for CoreGRID task 7.3
Leader: UPC
Participants: INRIA, USTUTT, UOW, UPC, VUA, CYFRONET
Objectives:
– Specification and development of an Integrated Toolkit for the Generic platform

5. Plans for CoreGRID task 7.3
The integrated toolkit will:
– provide means for simplifying the development of Grid applications
– allow executing the applications in the Grid in a transparent way
– optimize the performance of the application

5. Plans for CoreGRID task 7.3: subtasks
Design of a component-oriented integrated toolkit
– Applications' basic requirements will be mapped to components
– Based on the generic platform (task 7.1)
Definition of the interface and requirements with the mediator components
– Performed in close coordination with the definition of the mediator components (task 7.2)
Component communication mechanisms
– Enhancement of the communication between integrated toolkit application components

5. Plans for CoreGRID task 7.3
Ongoing work:
– Study of the partners' projects
– Definition of a roadmap
– Integration of the PACX-MPI configuration manager with the GRID superscalar deployment center
– Specification of GRID superscalar based on the component model

Summary
GRID superscalar has proven to be a good solution for programming grid-unaware applications
The new enhancements for resource selection are very promising
Examples of extensions:
– Cost-driven resource selection
– Limitation of data movement (confidentiality preservation)
Ongoing work in CoreGRID to integrate with component-based platforms and other partners' tools