The OurGrid Project Walfredo Cirne Universidade Federal de Campina Grande

eScience
Computers are changing scientific research:
− Enabling collaboration
− As investigation tools (simulations, data mining, etc.)
As a result, many research labs around the world are now computation hungry. Buying more computers is just part of the answer; better using existing resources is the other.

Solution 1: Globus
Grids promise "plug on the wall and solve your problem". Globus is the closest realization of this vision:
− Deployed at dozens of sites
But it requires highly specialized skills and complex off-line negotiation. It is a good solution for large labs that work in collaboration with other large labs:
− CERN's LCG is a good example of the state of the art

Solution 2: Voluntary Computing
Voluntary-computing projects have been a great success, harnessing the power of millions of computers. However, to use this solution, you must:
− have a very high-visibility project
− be in a well-known institution
− invest a good deal of effort in "advertising"

And what about the thousands of small and mid-sized research labs throughout the world that also need lots of compute power?

Solution 3: OurGrid
OurGrid is a peer-to-peer grid: each lab corresponds to a peer in the system. OurGrid is easy to install and configures itself automatically, so labs can freely join the system without any human intervention. To keep this feasible, we focus on Bag-of-Tasks applications.

Bag-of-Tasks Applications
− Data mining
− Massive search (such as searching for cryptographic keys)
− Parameter sweeps
− Monte Carlo simulations
− Fractals (such as Mandelbrot)
− Image manipulation (such as tomography)
− And many others…

OurGrid Components
− OurGrid: a peer-to-peer network that performs fair resource sharing among unknown peers
− MyGrid: a broker that schedules BoT applications
− SWAN: a sandbox that makes it safe to run a computation for an unknown peer

OurGrid Architecture
[Architecture diagram: at each of the 1…n sites, a MyGrid broker (user interface, application scheduling) sits on top of an OurGrid peer (site manager, grid-wide resource sharing), which dispatches work to SWAN (sandboxing) on the worker machines.]

An Example: Factoring with MyGrid

task:
  init:   put ./Fat.class $PLAYPEN
  remote: java Fat output-$TASK
  final:  get $PLAYPEN/output-$TASK results

task:
  init:   put ./Fat.class $PLAYPEN
  remote: java Fat output-$TASK
  final:  get $PLAYPEN/output-$TASK results

...
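As the example suggests, a MyGrid task description has three phases: init stages input files into the task's working directory on the grid machine ($PLAYPEN), remote is the command executed remotely, and final collects results back to the home machine. $TASK expands to the task number, so each task writes a distinct output file.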

MyGrid GUI

Network of Favors
OurGrid forms a peer-to-peer community that peers are free to join. It is important to encourage collaboration within OurGrid (i.e., resource sharing):
− In file-sharing communities, most users free-ride
OurGrid uses the Network of Favors (sketched below):
− All peers maintain a local balance for all known peers
− Peers with greater balances have priority
− The emergent behavior of the system is that by donating more, you get more resources
− No additional infrastructure is needed
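A minimal Python sketch of the Network of Favors accounting (names and the explicit favor values are illustrative; the real OurGrid estimates favor values from actual resource usage):

# Minimal sketch of Network-of-Favors (NoF) accounting.
class NetworkOfFavors:
    def __init__(self):
        self.balance = {}   # peer id -> local balance (never negative)

    def favor_received(self, peer, value):
        # The peer did us a favor: raise its balance.
        self.balance[peer] = self.balance.get(peer, 0.0) + value

    def favor_donated(self, peer, value):
        # We did the peer a favor: lower its balance, floored at zero,
        # so newcomers and debtors look alike (rejoining under a new
        # identity gives no advantage).
        self.balance[peer] = max(0.0, self.balance.get(peer, 0.0) - value)

    def pick_consumer(self, requesters):
        # Serve the requester with the highest local balance;
        # unknown peers default to zero.
        return max(requesters, key=lambda p: self.balance.get(p, 0.0))

nof = NetworkOfFavors()
nof.favor_received("B", 60)
nof.favor_received("D", 45)
print(nof.pick_consumer(["B", "D", "E"]))  # -> "B"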

NoF at Work [1]
[Diagram: peers A–E; A's broker consumes favors from providers, which send favor reports. Peers marked * have no idle resources now. A's local balances: B = 60, D = 45.]

NoF at Work [2]
[Diagram: same community; consumer queries and provider work requests are exchanged through A's broker. Peers marked * have no idle resources now. A's local balances: B = 60, D = 45, E = 0.]

Free-rider Consumption
Epsilon is the fraction of resources consumed by free-riders.

Equity Among Collaborators

Scheduling with No Information
Grid scheduling typically depends on information about the grid (e.g., machine speed and load) and the application (e.g., task size). However, getting good information is hard. Can we schedule without information and deploy the system now?
Work-queue with Replication (sketched below):
− Tasks are sent to idle processors
− When there are no more tasks, running tasks are replicated on idle processors
− The first replica to finish is the official execution
− Other replicas are cancelled
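A toy simulation to make the heuristic concrete (a sketch under simplifying assumptions: cancelled replicas are simply ignored rather than reclaimed, and the task to replicate is picked at random):

import heapq
import random

def wqr(task_sizes, proc_speeds, max_replicas=2):
    # Toy discrete-event simulation of Work-queue with Replication.
    # task_sizes: work units per task; proc_speeds: units per second.
    queue = list(range(len(task_sizes)))      # tasks never dispatched
    replicas = {t: 0 for t in queue}          # copies dispatched per task
    done = set()
    events = []                               # (finish_time, task, proc)
    idle = list(range(len(proc_speeds)))
    now = 0.0

    def dispatch(proc, task):
        replicas[task] += 1
        heapq.heappush(events,
                       (now + task_sizes[task] / proc_speeds[proc], task, proc))

    while len(done) < len(task_sizes):
        while idle:                           # give every idle processor work
            proc = idle.pop()
            if queue:
                dispatch(proc, queue.pop(0))  # fresh tasks first
            else:
                running = [t for t in replicas
                           if t not in done and replicas[t] < max_replicas]
                if not running:
                    idle.append(proc)
                    break
                dispatch(proc, random.choice(running))  # replicate
        now, task, proc = heapq.heappop(events)
        idle.append(proc)
        done.add(task)   # first replica to finish is the official one;
                         # a real system would cancel the other replicas
    return now           # makespan

# Two processors (fast and slow), three tasks: replicating the task
# stuck on the slow machine cuts the makespan from 500 to 350.
print(wqr([100, 200, 150], [1.0, 0.5]))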

Work-queue with Replication
8000 experiments, varying:
− grid heterogeneity
− application heterogeneity
− application granularity
Performance summary: [table]

WQR Overhead
The obvious drawback of WQR is the cycles wasted by the cancelled replicas. Wasted cycles: [table]

Data-Aware Scheduling
WQR achieves good performance for CPU-intensive BoT applications. However, many important BoT applications are data-intensive, and these applications frequently reuse data:
− during the same execution
− between two successive executions
Storage Affinity uses replication and just a bit of static information to achieve good scheduling for data-intensive applications (see the sketch below).
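A minimal sketch of the affinity computation (illustrative names; the real heuristic also replicates tasks, WQR-style, when processors go idle): a task's affinity with a site is the amount of its input data already stored there, and the task goes where it finds the most of its data.

# Sketch of the Storage Affinity placement decision (simplified).
def storage_affinity(task_inputs, site_storage):
    # task_inputs: task -> {filename: size_bytes}
    # site_storage: site -> set of filenames already cached there
    schedule = {}
    for task, inputs in task_inputs.items():
        def affinity(site):
            # Bytes of this task's input already present at the site.
            return sum(size for f, size in inputs.items()
                       if f in site_storage[site])
        site = max(site_storage, key=affinity)
        schedule[task] = site
        site_storage[site].update(inputs)  # after running, data is cached
    return schedule

sites = {"lab-A": {"genome.db"}, "lab-B": set()}
tasks = {"t1": {"genome.db": 3_300_000, "params1.txt": 2_000},
         "t2": {"genome.db": 3_300_000, "params2.txt": 2_000}}
print(storage_affinity(tasks, sites))  # both tasks favor lab-A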

Storage Affinity Results
3000 experiments, varying:
− grid heterogeneity
− application heterogeneity
− application granularity
Performance summary: [table comparing Storage Affinity, X-Suffrage, and WQR on average makespan (seconds) and standard deviation; values not transcribed]

SWAN: OurGrid Security
Bag-of-Tasks applications only communicate to receive input and return output:
− This is done by OurGrid itself
The remote task runs inside a Xen virtual machine, with no network access and disk access only to a designated partition (see the config sketch below).
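For concreteness, a hypothetical xm-style Xen domain configuration in the spirit of SWAN's isolation (paths and names are illustrative, not OurGrid's actual setup):

# Hypothetical Xen domU config: no network, one dedicated partition.
kernel   = "/boot/vmlinuz-guest"        # guest kernel booted by Xen
memory   = 256                          # MB given to the sandbox
name     = "swan-sandbox"
vif      = []                           # no virtual NIC: no network access
disk     = ["phy:/dev/sda7,xvda1,w"]    # only the designated partition
root     = "/dev/xvda1 ro"
on_crash = "destroy"                    # a crashed sandbox is just discarded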

SWAN Architecture
[Diagram: on each node, a layered stack: the grid application and grid middleware run on a guest OS inside the virtual machine, hosted by the grid OS on the physical machine; the stack is replicated across nodes.]

Making it Work for Real...

OurGrid Status
The OurGrid free-to-join community has been in production since December 2004. OurGrid is open source (GPL) and publicly available:
− We've had external contributions
OurGrid's latest version is 3.1:
− It contains the 10th version of MyGrid
− The Network of Favors has been available since version 3.0
− SWAN was made available with version 3.1
− We've had around 180 downloads

HIV research with OurGrid
[Diagram: HIV phylogeny. HIV-1 splits into groups M, N, and O, with group M subtypes A, B, C, D, F, G, H, J, K forming the majority in the world; annotations mark prevalence in Europe and the Americas, prevalence in Africa, subtypes B, C, F, and 18% in Brazil.]

HIV protease + Ritonavir
[Plots: RMSD for subtype B vs. subtype F.]

Performance Results for the HIV Application
− 55 machines in 6 administrative domains in the US and Brazil
− Task = 3.3 MB input, 1 MB output, 4 to 33 minutes of dedicated execution
− Ran 60 tasks in 38 minutes
− Speed-up is 29.2 for 55 machines, considering an 18.5-minute average machine
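The speed-up figure checks out if it is computed as total dedicated work over wall-clock time: 60 tasks at an 18.5-minute average is 60 × 18.5 = 1110 minutes of sequential work, and 1110 / 38 ≈ 29.2.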

Conclusions
− We have a free-to-join grid solution for Bag-of-Tasks applications working today
− Real users provide invaluable feedback for systems research
− Delivering results to real users is really cool! :-)

Questions?

Thank you! Merci! Danke! Grazie! Gracias! Obrigado! More at