COMP3019 Coursework: Introduction to M-grid Steve Crouch School of Electronics and Computer Science.


1 COMP3019 Coursework: Introduction to M-grid Steve Crouch s.crouch@software.ac.uk, stc@ecs School of Electronics and Computer Science

2 Objectives
 To equip students to drive a lightweight grid implementation to solve a problem that can benefit from using grid technology.
 To develop an understanding of the basic mechanisms used to solve such problems.
 To develop a general architectural and operational understanding of typical production-level grid software.
 To develop the programming skills required to drive typical services on a production-level grid.

3 Overview
 Part 1: m-grid
– m-grid: lightweight software illustrating grid concepts in use
– Develop a program with m-grid’s Java API to solve a simple problem, submit it to m-grid with input data, collect results
 Part 2: Google MapReduce & GridSAM
– MapReduce: framework for distributed processing of large datasets using many computers
– GridSAM: job submission web service interface to a computational resource (e.g. compute cluster, single machine)
– Extend code stubs to submit jobs to GridSAM and monitor them to completion
– Extend pseudocode that implements a basic MapReduce framework
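As a taste of Part 2, the MapReduce pattern can be sketched in plain Java on a single machine: inputs are mapped to (key, value) pairs, the pairs are grouped by key, and each group is reduced. The class and method names below are illustrative only – they are not the coursework's actual stubs.

```java
import java.util.*;

// Single-machine sketch of the MapReduce pattern: word counting.
public class MapReduceSketch {

    // Map phase: emit a (word, 1) pair for every word in a line of text.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(new AbstractMap.SimpleEntry<>(word, 1));
            }
        }
        return pairs;
    }

    // Shuffle + reduce phase: group emitted values by key and sum each group.
    static Map<String, Integer> mapReduce(List<String> lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {                 // map over all inputs
            for (Map.Entry<String, Integer> p : map(line)) {
                counts.merge(p.getKey(), p.getValue(), Integer::sum); // reduce
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(mapReduce(List.of("the grid", "the cluster and the grid")));
    }
}
```

In the real framework the map and reduce calls run on different machines, with the framework handling distribution and grouping; the coursework pseudocode fills in exactly those parts.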

4 Where to get stuff/help?
 Coursework materials can be obtained from the website – ready for Wednesday
 Software documentation
 Coursework help lecture 19th March
 Myself: s.crouch@software.ac.uk
 Building 32: Level 4 lab 4067 Bay 23

5 Background

6 The Problem
 Basically, we want to run a compute-intensive task
 We don’t have enough resources to run the job locally – at least, not to return results within a sensible timeframe
 We would like to use another, more capable resource

7 Distributed Computing in Olden Times
 Mainframes: a small number of ‘fast’ computers
– Very expensive
– Centralised
– Used nearly all the time
– Time allocations for users
– Not updated often
 Punched cards; huge wait times
 MailNet, SneakerNet, TyperNet, etc…
 Cray-1 (1976): $8.8 million, 160 megaflops, 8MB memory
Images: brewbooks (Cray X-MP, Cray-1 successor); Michael L. Umbricht and Carl R. Friend (Univac 1710 punched cards)

8 The Present…
 Now… a large number of slow computers:
– Cheap
– Distributed: computation and ownership
– Not used all the time
– Exclusive access to users
– Updated often
– e.g. desktop computers, PDAs, mobile phones
 Low utilisation of computing power
 e.g.: institutional/university resources…

9 It’s About Scaling Up…
 Compute and data – you need more, you go somewhere else to get it: local, institutional, national, international
 Then… the march towards localisation of computation: the Personal Computer
 Computational Science develops in laboratories
 Is this changing again?
Images: nasaimages, Extra Ketchup, Google Maps, Dave Page

10 The Grid - a Reminder
 The grid – many definitions!
“Grid computing offers a model for solving massive computational problems by making use of the unused CPU cycles of large numbers of disparate, often desktop, computers treated as a virtual cluster embedded in a distributed telecommunications infrastructure” – Wikipedia
“A service for sharing computer power and data storage capacity over the Internet.” – CERN (European Organisation for Nuclear Research)
 Two components of grid computing:
– Computational/data resource – e.g. computational cluster, supercomputer, desktop machine
– Infrastructure for externalising that resource to others

11 Some Examples…
 Grid (i.e. internet-accessible) examples:
– SETI@Home - http://setiathome.ssl.berkeley.edu/
 Processes data from the Arecibo Radio Telescope, Puerto Rico
 2 million volunteers installed the software
– Univa.org - http://www.univa.org/
 Projects such as Cancer Research, Smallpox
 2.5 million volunteer systems
 Sells processing time to organisations
 Computational resource (i.e. intranet-accessible):
– Cluster managers, supercomputers, single machines

12 The Idea - as a Provider…
 Goal: I want others to access my resources & applications
 I want to provide secure, controlled access to:
– My applications: specify who can access which applications
– My computational or data resources: I can limit external usage of my resources
 Provides an interface that allows remote users to access my resources
 Enables collaboration with other partners

13 The Idea - as a User (or Client)
 Goal: I want to use others’ resources & applications
 Through a network of service providers I can…:
– Gain access to applications that I do not have installed locally
– Use remote machines [larger resources] with more CPU, memory or storage, to process larger problem sizes
– Transparently switch between different service providers, with no exposure to the underlying OS, queuing policy, disk layout etc.

14 Grid Architecture
 The best way of designing grids…
– Loosely coupled services
– Message-based exchange
 The best way of running grids…
– Interoperability between versions & grids
– Standards for infrastructure & services
 The best way of building grids…
– Leverage existing infrastructure & standards
– Use Web Services…

15 Cluster Computing & the Grid
 The grid is predominantly built on cluster computing solutions
[Diagram: clusters at University A, University B and University C joined by the grid]

16 The General Idea…
 Abstract ‘virtualisation’ of local network resources
– Infrastructure manages many machines
– Presented as a single resource
– Submitted jobs get put on queue(s)
[Diagram: client submits jobs to a coordinator, which dispatches them to executors]

17 Condor – Background
 Began in 1988, based on the Remote Unix (RU) project
 Predominantly makes use of idle cycles on machines

18 Condor Components
 Four main machine ‘roles’ (daemons):
– Submit Client (condor_schedd): used to submit resource requests; monitor, modify and delete jobs.
– Central Manager (Server):
 condor_collector: collects information about pool resources.
 condor_negotiator: negotiates (match-makes) between resources and resource requests.
– Job Executor (condor_startd): executes jobs, advertises resources. Enforces local policy.
– (Checkpoint Server (condor_ckpt_server): services requests to store and retrieve checkpoint files.)

19 Other Condor daemons…
 condor_master: controls the daemons running on a machine, keeping them running according to the machine’s role.
 condor_shadow: monitors a running job on behalf of the submit client, from the submit client’s machine – i.e. one daemon for every client job.
 condor_starter: creates the environment, then runs and monitors the job on the executor – i.e. one daemon for every executor job.

20 Shared Disk Condor Architecture
1. Client submits job (executable + input data) to local queue
2. Client schedd advertises job request to server collector
3. Server negotiator gets next priority request from collector
4. Negotiator negotiates with client schedd to match resource/job
5. Client removes job from queue and sends it to executor
6. Job runs on executor
7. Job output results returned to client
[Diagram: submit client (condor_schedd, condor_shadow…), server (condor_negotiator, condor_collector) and executor (condor_startd, condor_starter…), annotated with the numbered steps above]
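For reference, step 1 is typically performed with condor_submit and a submit description file; a minimal sketch (the filenames are illustrative):

```
# Minimal Condor submit description file. condor_submit hands this to
# the local schedd, which queues the job and advertises the request.
universe   = vanilla
executable = my_job
arguments  = input.dat
output     = job.out
error      = job.err
log        = job.log
queue
```

The schedd then takes over: the advertisement, negotiation and execution steps listed above all happen without further user involvement.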

21 M-grid An overview

22 Computational Grids - in General
 Users supply tasks to be performed via a client
 Execution nodes contribute processing power
 Coordinator node sends tasks to execution nodes, ensuring results are returned
 Existing grid technology is sophisticated, which brings significant complexity
– To what extent can this be reduced?
[Diagram: client submits tasks to a coordinator, which dispatches them to executors]

23 Java Applets?
 How about Java applets as the program unit?
– Browsers could then act as execution nodes
 Security concerns? Web browsers execute foreign code, but:
– Java applets are executed within a ‘sandbox’ virtual machine
– Stringent security restrictions are imposed
– Browsers have in-built security configuration
– An applet can only contact its originating server
 Risk significantly reduced

24 M-grid: A Lightweight Grid I
 M-grid:
– Execution node = Java-applet-enabled browser
– Client = browser
– Coordinator = web server
– Tasks distributed as applets in web pages
 An execution node’s browser opens a web page on the server, runs the task applet, then uploads the results to the server
[Diagram: client, coordinator and executors as above]

25 M-grid: Overview
 Implemented on:
– Microsoft’s IIS (Internet Information Server) using ASP
– Apache Tomcat – we’ll use this one!
 Client:
– Develops an applet class as an extension of the MGridApplet class
– Can run the applet locally in appletviewer for testing
– Compiles and packages the applet with an input parameters file into a jar file
– Submits the jar to the web server via the JobSubmit web page
– Eventually collects results via the ViewJobs web page
 Execution node:
– Requests a job via the JobRequest page
– The applet submits results from the job using the SubmitResults page
 Security provided by session authentication
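A hypothetical m-grid task might look like the sketch below. The real base class is MGridApplet and its actual method names are in the coursework documentation; the wrapper shown in the comment is an assumption. The compute logic is kept in a plain static method so it can also be run and tested locally, outside the applet.

```java
// Hypothetical m-grid task sketch; MGridApplet's real API is in the
// coursework documentation, so the applet wrapper is only a comment.
public class PrimeCountTask /* extends MGridApplet */ {

    // The actual work: count primes up to n, as a stand-in
    // compute-intensive job.
    static int primesUpTo(int n) {
        int count = 0;
        for (int i = 2; i <= n; i++) {
            boolean prime = true;
            for (int d = 2; (long) d * d <= i; d++) {
                if (i % d == 0) { prime = false; break; }
            }
            if (prime) count++;
        }
        return count;
    }

    // In the real applet, something like the following (names assumed)
    // would read n from the input parameters file packaged in the jar
    // and upload the answer via the SubmitResults page:
    //
    //   public void start() {
    //       int n = Integer.parseInt(getParameter("n"));
    //       submitResult(String.valueOf(primesUpTo(n)));
    //   }

    public static void main(String[] args) {
        System.out.println(primesUpTo(100));
    }
}
```

The same class, compiled against the m-grid API and packaged into a jar with its parameters file, is what the JobSubmit page accepts.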

26 Architecture

