
1 Beauty and the Beast RITA meets Beowulf
Richard Wallace MITRE Presentation 6/2/04

2 Our Agenda Today We will cover three big technology areas:
A brief history and overview of Beowulf clusters, the problems they were designed to solve, and the current research and commercial products
A brief introduction to RITA: what it is, how it works, and how it is applied to Beowulf, especially Grid Computing
We are time-limited today, so questions will be held for the end of the presentation, where we will have ample time for them. 6/2/04

3 Beowulf

4 Beowulf… Originally developed by Donald Becker* at NASA. A Beowulf cluster is a grouping of, usually, identical cheap PC computers. They are networked into a small TCP/IP LAN, with libraries and programs installed that allow processing to be shared across multiple CPUs. See Beowulf.org.
Becker is now the Chief Technology Officer (CTO) of Scyld Computer Corporation of Annapolis, Maryland, a major developer and supplier of Beowulf clusters. Donald Becker is known throughout the international community of operating system developers for his contributions to networking software. In addition, Don was a co-founder of the original Beowulf project, which is the cornerstone of commodity-based high-performance cluster computing. Don's work in parallel and distributed computing began in 1983 at MIT's Real Time Systems group. After MIT, Don was a researcher at the Institute for Defense Analyses Supercomputing Research Center, working on parallel compilers, specialized computational techniques, and various networking projects. Subsequently, he started the Beowulf Parallel Workstation project at NASA's Goddard Space Flight Center. While at NASA, he led the technical development of the Beowulf project and made significant contributions to the Linux kernel, most visibly in providing very broad support for networking devices. Don is a co-author of How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters and a co-editor of the Extreme Linux CD-ROM, the first packaged Beowulf software distribution. With colleagues from the California Institute of Technology (Caltech) and the Los Alamos National Laboratory, Becker was the recipient of the IEEE Computer Society 1997 Gordon Bell Prize for Price/Performance. In 1999 Becker received the Dr. Dobb's Excellence in Programming Award, which is presented annually to individuals who, "in the spirit of innovation and cooperation, have made significant contributions to the advancement of software development."
*Since 1998 at Scyld in Annapolis, MD 6/2/04

5 Beowulf name and meaning
Besides being a “cool” name, it does have bearing… An epic poem from circa the 10th century, in Anglo-Saxon England. The story traces the life of Beowulf, a heroic king of the Geats, and his great battles with the troll-like monster Grendel, then Grendel's mother, and finally a fire-breathing dragon, which costs Beowulf his life. It is fundamentally a depiction of a pre-Christian warrior society, in which the relationship between the leader, or king, and his thanes is of paramount importance. This relationship is defined in terms of provision and service. This society also had strongly defined terms of kinship. 6/2/04

6 What makes a Beowulf Cluster?
[Brown1]
The nodes are dedicated to the Beowulf and serve no other purpose.
The network, or networks, on which the nodes reside are dedicated to the Beowulf and serve no other purpose.
The nodes are mass-market COTS computers. An essential part of the Beowulf definition (one that distinguishes it from, for example, a vendor-produced massively parallel processor, or MPP, system) is that its compute nodes are mass-produced commodities, readily available "off the shelf" and hence relatively inexpensive.
The network is also a mass-market COTS entity (if not actually "mass market" - some Beowulf networks are sold pretty much only to Beowulf builders), at least to the extent that it must integrate with mass-market COTS computers and hence must interconnect through a standard (e.g. PCI) bus. Again, this is primarily to differentiate it from vendor-produced MPP systems, where the network and CPUs are custom-integrated at very high cost.
The nodes all run open source software.
The resulting cluster is used for High Performance Computing (HPC, also called "parallel supercomputing" and other names). 6/2/04

7 Problem Classes for a Beowulf
Superior uses of a Beowulf cluster
Autonomous parallel execution: particle physics, vector processing, fluid dynamics, et al.
Coordinated service delivery (computational fan-out, fan-in): the classic "Bank Teller Problem"
Inferior uses of a Beowulf cluster
Processes sharing common memory (tight parallel execution)
Serial processing and multiple disparate programs running on the nodes in the system
Web services and SOA architectures (at least at the front end)
The classic "Dining Philosophers Problem" 6/2/04
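The "coordinated service delivery" pattern above is essentially a scatter/gather round trip. As a rough illustration only - the request values and the serve_request() worker below are hypothetical, and MPI itself is only introduced later (slide 27) - a minimal fan-out/fan-in sketch in C with MPI could look like this:

```c
/* Minimal fan-out/fan-in sketch with a hypothetical per-node workload.
 * Build/run (typical): mpicc fanout.c -o fanout && mpirun -np 4 ./fanout */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-node service, e.g. one "teller" handling one request. */
static double serve_request(double request) { return request * 2.0; }

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *requests = NULL, *replies = NULL;
    if (rank == 0) {                      /* root holds one request per node */
        requests = malloc(size * sizeof(double));
        replies  = malloc(size * sizeof(double));
        for (int i = 0; i < size; i++) requests[i] = (double)i;
    }

    double my_request, my_reply;
    /* Fan-out: one request is scattered to each node. */
    MPI_Scatter(requests, 1, MPI_DOUBLE, &my_request, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    my_reply = serve_request(my_request);
    /* Fan-in: all replies are gathered back at the root. */
    MPI_Gather(&my_reply, 1, MPI_DOUBLE, replies, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size; i++) printf("node %d replied %g\n", i, replies[i]);

    free(requests); free(replies);        /* free(NULL) is safe on non-root ranks */
    MPI_Finalize();
    return 0;
}
```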

8 Beowulf Topology NOWs (network of workstations) or COWs (cluster of workstations) or POPs (Pile of PC's) True Beowulf [Brown1] 6/2/04

9 Commercial Vendors
Accelerated Servers, Penguin Computing, Aspen Systems, PSSC Labs, Custom Fit, Inc., Atipa, Turbotek, Microway, Linux Labs, Open Clustering UK, Scyld Software Corporation
Plus 115 non-commercial projects! 6/2/04

10 Pictures of some real Beowulfs…
Physics Department, Drexel University, Philadelphia, PA 6/2/04

11 294 Node Los Alamos National Laboratory System
The Space Simulator is a 294-processor Beowulf cluster. It is based on the Shuttle XPC SS51G mini chassis, which uses a heat pipe instead of a CPU fan. The small size of the XPC cases allowed us to fit the cluster in about half the space of the previous 144-processor Avalon cluster. Each node consists of a 2.53 GHz Pentium 4 processor, 1 GB of 333 MHz DDR SDRAM, an 80 GB Maxtor hard drive, and a 3Com 3C996B-T Gigabit Ethernet card. The cost of an individual node was less than $1000. The network switch is composed of a Foundry FastIron 1500 switch trunked to another FastIron 800 switch, which provides a total of 304 Gigabit Ethernet ports using the 16-port JetCore modules. The system was delivered in late September 2002, and its Linpack performance on 288 processors in October 2002 made it the 85th fastest computer in the world according to the TOP500 list. 6/2/04

12 RITA

13 RITA… Regulated Isomorphic Temporal Architecture –
I developed RITA for MCI as a telecommunication resource* grid control. It regulates isomorphic reconfiguration with regard to temporal changes in network topology and to client, server, or peer architecture changes. There is a patent pending on the technology. The RITA technology is refined from the original 1984 WPAFB AFWAL/AVSAIL Data-Driven Operating System work by Wallace, McDonald, and Hague.
Isomorphic – similarity of form between substances of unlike composition
*Internal and edge router provisioning, computing element message-oriented middleware, 1-N00 NPA translation, et al. 6/2/04

14 RITA-Beowulf Integration
RITA was developed to solve distributed application issues with dissimilar computational elements having complicated event interactions based on temporal and conditional values
Parallel applications for Beowulf systems do not have dissimilar-system complexity, but they do have temporal and computational complexities
System testing in any parallel system is difficult. System testing for such systems can lead to exponential time for test case generation. [Butler1, Littlewood1]
RITA controls canonical event transforms through formal mathematics and a novel condition-event matrix. This produces deterministic execution and few, if any, event interaction errors
RITA is designed for element-level to system-of-systems-level interaction 6/2/04

15 RITA Basics Event Transform
For all events, given event E, condition C, evaluation matrices, guard G, and action A, this figure illustrates the general stimulus flow model. The model takes one to N input events into a precondition-event matrix (see Equation 1), which causes an Action. The result of this action is input to a postcondition-event matrix (see Equation 1), whose output of zero to N events may chain into a further precondition-event matrix. Events leave the general stimulus flow model when they are fully consumed. At the point of consumption, an event becomes a datum germane to the application controlled by events. Each condition-event matrix has one to many resultants. The number of resultants is determined by the number of condition vectors. Resultants may be evaluated by conjunction, which allows many input events to be consolidated into one event. For each of the canonical event forms, a specific precondition and postcondition equation is used to evaluate events in fidelity to their canonical form. Canonical Events 6/2/04
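As a rough sketch only - the type and function names below are illustrative assumptions, not the actual RITA interfaces - the precondition/action/postcondition chain described above might be modeled in C as:

```c
/* Illustrative sketch of the stimulus flow; names are assumptions only. */
#include <stdbool.h>
#include <stddef.h>

typedef struct Event Event;                 /* opaque application event */

typedef struct {
    bool   (*precondition)(const Event *);                   /* precondition-event matrix */
    void   (*action)(const Event *);                          /* Action fired on a match   */
    size_t (*postcondition)(const Event *, Event *, size_t);  /* emits 0..N chained events */
} EventTransform;

/* Push one event through a transform. A return value of 0 means the event
 * produced no further events: it is fully consumed and becomes a datum of
 * the controlled application, as described above. */
size_t dispatch(const EventTransform *t, const Event *e,
                Event *chained, size_t max_chained)
{
    if (!t->precondition(e))
        return 0;                           /* rejected by the precondition matrix */
    t->action(e);
    return t->postcondition(e, chained, max_chained);
}
```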

16 RITA Elements
Condition-Event Matrix, Guard Elements, Condition Vector, Resultant, Conjunction, Realization of Conditions and Operands
Each event matrix M is a matrix of event conditions and the operations that are applied to those conditions. Each vector V of conditions is gated by a guard G such that the vector is evaluated if and only if the guard evaluates to True. Each evaluated vector has a resultant R that can be further reduced by conjunction to True. The guard is evaluated first, and if True the vector is evaluated. Optionally, the series of resultants is evaluated. If G, V, and optionally R, evaluate to True, the action is initiated. The matrix can be sparse or dense depending on the condition interactions in the event matrix. Conditions can be compounded by applying a Boolean operation matrix comprehended across the event matrix. Evaluation is by vector, with the result being true or false. 6/2/04
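The guard-then-vector evaluation order and the optional conjunction of resultants can be sketched as follows; this is an illustrative assumption (it treats a vector whose guard is False as contributing a False resultant), not the RITA implementation:

```c
/* Illustrative sketch of condition-event matrix evaluation; names assumed. */
#include <stdbool.h>
#include <stddef.h>

typedef bool (*Condition)(const void *operand);

typedef struct {
    Condition        guard;        /* G: gates the whole vector    */
    const Condition *conditions;   /* V: one condition vector of M */
    size_t           n_conditions;
} ConditionVector;

/* The guard is checked first; only if it is True is the vector evaluated,
 * and its resultant R is the conjunction of its conditions. */
static bool evaluate_vector(const ConditionVector *v, const void *operand)
{
    if (!v->guard(operand))
        return false;                        /* guard False: vector not evaluated */
    for (size_t i = 0; i < v->n_conditions; i++)
        if (!v->conditions[i](operand))
            return false;
    return true;
}

/* Here the optional conjunction over all resultants is applied: the action
 * is initiated only when G, V, and R evaluate to True for every vector. */
bool evaluate_matrix(const ConditionVector *vectors, size_t n,
                     const void *operand, void (*action)(const void *))
{
    for (size_t i = 0; i < n; i++)
        if (!evaluate_vector(&vectors[i], operand))
            return false;
    action(operand);
    return true;
}
```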

17 Event Operations 6/2/04

18 Event Operations (Cont.)
6/2/04

19 Event Operations (Cont.)
6/2/04

20 RITA Screen shots The Event Manager main window shows the user interface after start-up for a new condition-event matrix. The user checks the "New" box, and only the Matrix Name and System Assignment name fields are active. All other controls for the Matrix tab control are inactive. The user can choose between saving this assignment or canceling it with the "Save" or "Cancel" buttons respectively. The event engine executable is the name assigned to the system; in Figure 5 the event engine is named "Test_System" and "Test Matrix" is the name of the condition-event matrix. All names are case sensitive. The matrix name and vector variable are used to create the UUID name for events. This name provides the namespace that is used to route events from publishers to subscribers.
Event Manager Main Window 6/2/04

21 RITA Screen Shots (Cont.)
Extremely simplified example! This shows the same control information as Figure 6, with the addition of the condition logic (expressed in the ANSI C language). The following constructs are allowed:
IF-THEN
IF-THEN-ELSE
CASE-SELECT (i.e. switch). All switch statements must be inclusive of all values passed by the Guard condition.
The relational operators less than (<), greater than (>), equal (==), not equal (!=), less than or equal to (<=), and greater than or equal to (>=)
Computational (i.e. stack) variables are allowed for addition (+), sign/subtraction (-), multiplication (*), and division (/)
No pointers or locally declared variables are allowed.
Only Boolean values are allowed as return values. All return values must be of the form <condition_name>_TRUE or <condition_name>_FALSE for conditional computations.
Vector variable data that represents time values relies on three special time functions, which ensure that the condition-event matrix does not have to rely on user-manipulated time functions. The evaluate_time() function can be used in the Guard or Condition code. The create_time() and get_time() functions can only be used in code external to the event engine. The event engine system time is constant throughout evaluation of the guard and its condition vector.
Condition Vector Tab with Condition Code 6/2/04
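A hypothetical condition body that follows the rules listed above might read as follows; the condition name InitialCheck1, the vector variable vector_value, and the constant LIMIT are invented for illustration and assumed to be supplied by the event engine:

```c
/* Hypothetical body for a condition named "InitialCheck1", as it might be
 * typed into the Condition Vector tab. Per the rules above there are no
 * pointers and no locally declared variables; the return value is one of
 * the Boolean symbols InitialCheck1_TRUE / InitialCheck1_FALSE. */
if ((vector_value * 2) <= LIMIT)
    return InitialCheck1_TRUE;
else
    return InitialCheck1_FALSE;
```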

22 RITA Screen Shots (Cont.)
This shows the logic for vector Guard creation. A guard may be a condition previously designed and entered into the user interface. The name of the condition is the value used to identify it. In Figure 11, the guard is uniquely named for clarity of the example. Note that the system will check that the guard variable matches the vector variable to ensure consistency. The new fields shown are:
Guard name and event type (i.e. "InitialCheck1G" and "TRANSITIONAL")
The choice of SPIKE, SET-AT, or TRANSITIONAL event type is made for the guard to allow state memory of the prior event. The state memory of a guard is an internal value, unpublished and unavailable to application programs. The internal state of a guard, if violated, invalidates the guard, resulting in a FALSE for the vector. This ensures intended event form integrity.
Guard Tab 6/2/04
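One possible reading of the guard state memory described above, sketched purely as an assumption (the enum, field, and function names are invented, and the real semantics of SPIKE, SET-AT, and TRANSITIONAL may differ):

```c
/* Assumed sketch of guard state memory; not the RITA implementation. */
#include <stdbool.h>

typedef enum { SPIKE, SET_AT, TRANSITIONAL } EventForm;

typedef struct {
    EventForm form;
    bool last_value;    /* internal state memory: never published to applications */
    bool has_history;
} GuardState;

/* For a TRANSITIONAL guard we assume the incoming value must differ from the
 * remembered one; if the expected form is violated the guard returns false,
 * which forces the whole condition vector to FALSE as described above. */
bool guard_accepts(GuardState *g, bool value)
{
    bool ok = true;
    if (g->form == TRANSITIONAL && g->has_history && value == g->last_value)
        ok = false;                         /* no transition: event form violated */
    g->last_value = value;
    g->has_history = true;
    return ok;
}
```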

23 RITA Usage An example of the event shorthand being used to lay out a non-terminating event system. Stimulus starts in system 1. The following slides show the data entry for generation of the condition-event matrix. The example has three event systems. The design is analyzed for correct matrices per system and for detection of any race or deadlock conditions. The following tables show the analysis leading to direct data entry in the Event Manager. The event manager would generate a UUID for each of the system-plus-matrix-plus-variable names. The names are kept as mnemonics here for clarity and tracking to Figure 19. From the design, the seminal event system has to be identified by the designer, as that event system has to have some initial state prior to entering its steady state. In this organization of systems, system one is the seminal system. Event systems can have design errors such that an event sequence is deadlocked (either at initiation or from event interaction), exponential (commonly known as event storms), endless (event consumption error), or quiescent (event starvation) – the latter two may be correct given the parameters of the event system organization. In this organization of systems, after an initialization that results in system three's event one being delivered to system one, the system is endless – which is the design intent here. 6/2/04

24 RITA Usage (Cont.) 6/2/04

25 RITA Usage (Cont.) 6/2/04

26 RITA Usage (Cont.) 6/2/04

27 RITA Grid-enables Beowulf
RITA is neither PVM nor MPI; it would use such primitives local to a Beowulf cluster system.
RITA leverages well-established message-oriented middleware to control communication between systems, thus providing an architecture that enables a Beowulf to participate in a Grid
RITA extends the Beowulf cluster into a Grid system by controlling its temporal and computational behavior, allowing heterogeneous systems-of-systems to be created
MoM for WAN, federated systems, command and control, telecommunication network control, and the like
Middleware – IBM MQ, BEA MessageQ, Microsoft MQ, TIBCO Rendezvous, JMS (Fiorano, IBM MQ). Guaranteed delivery, queuing, message fan-out, fan-in, etc.
MPI – Message Passing Interface from Argonne National Laboratory
PVM – Parallel Virtual Machine from Oak Ridge National Laboratory (more or less superseded by MPI) 6/2/04

28 Grid Computing

29 Grid Computing Time Line
6/2/04

30 Where Grids are going... Too many projects to talk about here. See the Backup Slides section.
“Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of geographically distributed "autonomous" resources dynamically at runtime depending on their availability, capability, performance, cost, and users' quality-of-service requirements.” – Dr. Rajkumar Buyya, The University of Melbourne, Australia
“The term 'Grid' is chosen to suggest the idea of a 'power grid': namely that application scientists can plug into the computing infrastructure like plugging into an electrical power grid. It is important to note, however, that the term 'Grid' is sometimes used synonymously with a networked, high-performance computing infrastructure. Obviously this aspect is an important enabling technology for future applications, but in reality it is only part of a much larger scenario that also includes information handling and support for knowledge within the scientific process. It is this broader view of the infrastructure that is now being referred to as the Semantic Grid. The Semantic Grid is characterized by an open system, with a high degree of automation, which supports flexible collaboration and computation on a global scale.” – IEEE Distributed Systems Online 6/2/04

31 Questions?

32 Backup Slides

33 American Grid technology
American Projects
AppLeS. Project Status: Research. Application-Level Scheduling - this is an application-specific approach to scheduling individual parallel applications on production heterogeneous systems.
Bond. Project Status: RES. The Bond system consists of the Bond shell, able to execute a set of Bond commands; a resource database; a knowledge processing system; and a set of utilities.
Bricks. Project Status: RES. Bricks is a performance evaluation system that allows analysis and comparison of various scheduling schemes in a typical high-performance global computing setting.
DOCT. Project Status: GOV. The Distributed Object Computation Testbed (DOCT) is an environment for handling complex documents on geographically distributed data archives and computing platforms.
Entropia.com. Project Status: COM. Desktop software that should provide a universal and pervasive source of computing power via the Internet.
Folding@home. Project Status: RES. The chemistry department of Stanford University is involved in understanding how proteins self-assemble. The group has developed a new method of protein simulation to address this grand challenge problem. In order to perform protein simulation many processors are required. This project distributes the processing demands across 'volunteer' Internet clients.
GLOBUS. Project Status: RES. This project is developing basic software infrastructure for computations that integrate geographically distributed computational and information resources. 6/2/04

34 American Grid technology (Cont.)
GLUE. The Grid Laboratory Uniform Environment (GLUE) effort is sponsored by the High Energy and Nuclear Physics Intergrid Joint Technical and Coordination Boards. It aims to sponsor and enable interoperability between the EU physics grid project efforts (EDG, DataTag, etc.) and the US physics grid project efforts (iVDGL, PPDG, GriPhyN). GLUE addresses each service required by an end-to-end application or experiment system running over the grid, starting from the lowest-level functionality for which interoperability between international projects is needed.
Grid Resource Broker (GRB). Project Status: RES. Country: Italy and USA. A grid portal that allows trusted users to create and handle computational grids on the fly, exploiting a simple and friendly GUI.
HARNESS. Project Status: RES. Harness builds on the concept of the virtual machine and explores dynamic capabilities beyond what PVM can supply. It focused on developing three key capabilities: parallel plug-ins, peer-to-peer distributed control, and multiple virtual machines.
HTC (Condor). Project Status: RES. The Condor project aims to develop, deploy, and evaluate mechanisms and policies that support high-throughput computing (HTC) on large collections of distributed computing resources.
InfoSpheres. Project Status: RES. The Caltech Infospheres Project researches compositional systems, which are systems built from interacting components.
JAM (for Job & Application Manager) is a proof-of-concept, Jini(TM) technology based, graphical interface to Grid Engine. It provides a framework for selecting from a set of registered applications (or specifying one yourself directly), and selecting a Grid Engine queue to which the job is submitted (from a set of available queues originating from one or more clusters or cells), with support for filtering on various queue attributes. Once the job is submitted, it can be monitored and controlled via JAM.
Javelin. Project Status: RES. Javelin: Internet-Based Parallel Computing Using Java. 6/2/04

35 American Grid technology (Cont.)
LEGION. Project Status: RES. Legion is an object-based metasystem. Legion supports transparent scheduling, data management, fault tolerance, site autonomy, and a wide range of security options.
NASA IPG. Project Status: RES. The Information Power Grid is a testbed that provides access to a grid – a widely distributed network of high performance computers, stored data, instruments, and collaboration environments.
NETSOLVE. Project Status: RES. NetSolve is a project that aims to bring together disparate computational resources connected by computer networks. It is an RPC-based client/agent/server system that allows one to remotely access both hardware and software components.
NSF Middleware Initiative. Project Status: RES. Featuring the GRIDS Center Software Suite and NMI-EDIT Components, it integrates key software packages, standards, and best practices for science, engineering, and education.
PARDIS. Project Status: RES. PARDIS is an environment providing support for building PARallel DIStributed applications. It employs the Common Object Request Broker Architecture (CORBA) to implement application-level interaction of heterogeneous parallel components in a distributed environment.
PUNCH. Project Status: RES. A platform for Internet computing that turns the World Wide Web into a distributed computing portal. Users can access and run programs via standard Web browsers. Applications can be installed "as is" in as little as thirty minutes. Machines, data, applications, and other computing services can be located at different sites and managed by different entities. Jobs can be automatically routed to Condor, DQS, Globus, Grid Engine, or PBS. PUNCH provides a network operating system, logical user accounts, a virtual file system service that can access remote data on demand, and an active yellow pages service that can manage resources spread across administrative domains. Together, these capabilities allow PUNCH to manage and broker resources among end users, application service providers, storage warehouses, and CPU farms. PUNCH has been operational for five years, and currently powers three portals and serves 70 engineering applications to about 2,000 users across two dozen countries.
WebFlow. Project Status: RES. WebFlow can be regarded as a high-level, visual user interface and job broker for Globus.
WebSubmit. Project Status: RES. A Web-based interface to high-performance computing resources. 6/2/04

36 Asia-Pacific Grid technology
Asia-Pacific Projects
cJVM. Project Status: Research. Country: Israel. cJVM is a Java Virtual Machine (JVM) which provides a single system image of a traditional JVM while executing in a distributed fashion on the nodes of a cluster.
Molecular Modelling for Drug Design on Peer-to-Peer Grid. Project Status: RES. Country: AU.
DISCWorld. Project Status: RES. Country: AU. An infrastructure for service-based metacomputing across LAN and WAN clusters. It allows remote users to log in to this environment over the Web and request access to data, and also to invoke services or operations on the available data.
Gridbus Project. Project Status: RES. Country: AU. The key objective of the Gridbus project is to develop fundamental, next-generation cluster and grid technologies that support true utility-driven service-oriented computing.
GridScape. A tool from the Gridbus project for the creation of interactive and dynamic Grid testbed web portals. Features include rapid creation of Grid testbed portals, simple portal management and administration, and a clear, user-friendly overall view of Grid testbed resources.
HEPGrid. Project Status: RES. Country: AU. This project aims to design and develop a safe, secure, dynamic and adaptive Data Grid Resource Broker for managing and scheduling data-intensive science applications on geographically distributed resources.
Nimrod/G & GRACE. Project Status: RES. Country: AU. A global scheduler (resource broker) for parametric computing over enterprise-wide clusters or computational grids.
NINF. Project Status: RES. Country: JP. Ninf allows users to access computational resources including hardware, software and scientific data distributed across a wide area network with an easy-to-use interface. 6/2/04

37 European Grid technology
European Projects
AVO. Project Status: RES. Country: EU. The Astrophysical Virtual Observatory (AVO) will combine astronomical databases and processing capabilities in a virtual observatory.
CERN Data Grid. Project Status: RES. Country: EU. This project aims to develop middleware and tools necessary for the data-intensive applications of high-energy physics.
Covise. Project Status: COM. Country: DE. COVISE - Collaborative Visualization and Simulation Environment.
CrossGrid. Project Status: RES. Country: EU. Developing techniques for large-scale grid-enabled real-time simulations and visualisations that require responses in real time. Addresses issues such as the distribution of source data, simulation and visualisation, virtual time management, interactive simulation and visualisation rollback, and platform-independent virtual reality.
DAMIEN. Project Status: RES. Country: DE. Distributed Application and Middleware for Industrial Use of European Networks, focusing on providing a framework of tools for Grid applications.
DAS. Project Status: RES. Country: NL. This is a wide-area distributed cluster, used for research on parallel and distributed computing by five Dutch universities. 6/2/04

38 European Grid technology (Cont.)
DataGrid. Project Status: RES. Country: EU. The objective of DataGrid is to enable next-generation scientific exploration which requires intensive computation and the analysis of shared, large-scale databases.
DataTAG. Project Status: RES. Country: EU. The objective of DATATAG is to implement a network infrastructure for high-speed interconnection between individual GRID domains in Europe and the US.
EGSO. Project Status: RES. Country: EU. The European Grid of Solar Observations (EGSO) aims to address the problem of combining heterogeneous data from scattered archives of space- and ground-based observations into a single "virtual" dataset. A new, unified solar feature catalogue will allow the user to search for observations on the basis of events and phenomena, rather than just time and location.
EROPPA. Project Status: COM. Country: EU. Software to design, implement, and experiment with remote/distributed access to 3D graphic applications on high-performance computers for the use of post-production SMEs.
e-Science Environment for the CLRC applies a range of techniques within the UK CLRC to develop an Integrated e-Science Environment which will also be deployed to support grant-funded collaborations. The project currently has three streams: i. HPC Grid Services Portal; ii. Data Portal; iii. Advanced Visualisation Tools. The UK e-Science Community InfoPortal is designed to increase the effectiveness of users of HPC resources on the Grid.
e-Science OGSA Testbed. Project Status: RES. Country: UK. A one-year project funded by EPSRC via the e-Science programme to test and evaluate the first implementation of the OGSA core by deploying the GT3 toolkit in a service-based Grid testbed spanning organisational boundaries with differing institutional services, security policies and firewalls.
eSecurity Centre. Project Status: RES. Country: UK. A joint venture between the Universities of Manchester and Salford, the eSecurity Centre provides a point of reference for pooling their combined Grid security knowledge. 6/2/04

39 European Grid technology (Cont.)
EuroGrid. Project Status: COM. Country: EU. Application Testbed for European Grid Computing. The EUROGRID project will develop core Grid software components and integrate them into an environment providing fast file transfer, resource brokerage, interfaces for coupled applications and interactive access.
Globe. Project Status: RES. Country: EU. Globe is a research project aiming to study and implement a powerful unifying paradigm for the construction of large-scale wide-area distributed systems: distributed shared objects.
GLUE. The Grid Laboratory Uniform Environment (GLUE) effort is sponsored by the High Energy and Nuclear Physics Intergrid Joint Technical and Coordination Boards. It aims to sponsor and enable interoperability between the EU physics grid project efforts (EDG, DataTag, etc.) and the US physics grid project efforts (iVDGL, PPDG, GriPhyN). GLUE addresses each service required by an end-to-end application or experiment system running over the grid, starting from the lowest-level functionality for which interoperability between international projects is needed.
GRIDs for Complex Problem Solving, Information Society Technologies (IST). Objectives: 1. To expand the potential of the Grid and peer-to-peer approaches to solving complex problems which cannot be solved with current technologies in application fields such as, but not limited to, industrial design, engineering and manufacturing, health, genomics and drug design, environment, critical infrastructures, energy, business and finance, and new media. 2. To overcome present architectural and design limitations hampering the use and wider deployment of computing and knowledge Grids and to enrich their capabilities by including new functionalities required for complex problem solving. This should help the larger uptake of Grid-type architectures and extend the concept from computation Grids to knowledge Grids, eventually leading to a “semantic Grid”.
GRIA. Project Status: RES. Country: EU. Grid for Business and Industry will devise business models and processes that make it feasible and cost-effective to offer and use computational services securely in an open GRID marketplace.
Grid Computing Technology Infoware. Project Status: RES. Country: Intl. GCTI aims to contribute to the development and advancement of technologies that enable us to access computing power and resources with an ease similar to electrical power. 6/2/04

40 European Grid technology (Cont.)
GridLab. Project Status: RES. Country: Poland. Aims to build components for Grid applications (as MatLab does for mathematics), and realistic testbeds for their development.
GRIDSTART. Project Status: RES. Country: EU. Information Society Technologies. Ten EU-funded projects have been clustered with the intention to stimulate the wide deployment of appropriate technology and to support the early adoption of best practice. This will be achieved by raising the awareness of potential users of the solutions developed, by connecting technology suppliers with those who will deploy it, and by fully identifying and exploiting synergies within the cluster. The projects involved in developing the Grid infrastructure are: AVO, CrossGrid, DAMIEN, DataGrid, DataTAG, EGSO, EuroGrid, GRIA, GridLab, GRIP.
Grid Interoperability Project (GRIP). Project Status: RES. Country: EUR. A 2-year project funded by the European Union to realise the interoperability of Globus and UNICORE and to work towards standards for interoperability in the Global Grid Forum. The GRIP project will demonstrate interoperability of jobs launched via UNICORE and running on resources controlled purely by Globus. Demonstrations will be given of a portal for biomolecular applications built on UNICORE's intuitive job preparation client, and of a relocatable weather forecasting model that can access data from large-scale weather prediction models run at a large supercomputing centre and tailor them to local conditions by running on local resources, with results displayed on a client's workstation.
Grid Resource Broker (GRB). Project Status: RES. Country: Italy and USA. A grid portal that allows trusted users to create and handle computational grids on the fly, exploiting a simple and friendly GUI.
JaCo3. Project Status: RES. Country: EU. Java and CORBA based Collaborative Environment for Coupled Simulations. The project aims to assess the effectiveness of a collaborative working environment in the scientific simulation design process, by coupling existing codes and visualisation systems.
JaWs. Project Status: RES. Country: GR. JaWS is an economy-based computing model where both resource owners and programs using these resources place bids to a central marketplace that generates leases of use. 6/2/04

41 European Grid technology (Cont.)
LHC Computing Grid Project. Project Status: RES. Country: EU. The goal of the LCG project is to meet the needs of the Large Hadron Collider (LHC), which is being constructed at CERN in Switzerland, by deploying a worldwide computational grid service, integrating the capacity of scientific computing centres spread across Europe, America and Asia into a virtual computing organisation.
MAMMO Grid, European Federated Mammogram Database Implemented on a GRID Structure. The aim of this project is to develop a European-wide database of mammograms that will be used to investigate a set of important healthcare applications, as well as the potential of this Grid to support effective co-working between healthcare professionals throughout the EU.
MetaMPI. Project Status: RES. Country: DE. MetaMPI supports the coupling of heterogeneous MPI systems, thus allowing parallel applications developed using MPI to be run on grids without alteration.
METODIS. Project Status: COM. Country: DE. Metacomputing Tools for Distributed Systems - a metacomputing MPI library implemented both on TCP/IP and on ATM will serve as the application programming model. This project has now finished. The DAMIEN project is METODIS' successor.
MOL. Project Status: RES. Country: DE. Metacomputer OnLine is a toolbox for the coordinated use of WAN/LAN-connected systems. MOL aims at utilizing multiple WAN-connected high performance systems for solving large-scale problems that are intractable on a single supercomputer.
PACX-MPI. Project Status: RES. Country: DE. An MPI implementation to seamlessly run an MPI application on a Computational Grid.
Poznan Metacomputing. Project Status: RES. Country: PL. The Poznan Centre works on the development of tools and methods for metacomputing. 6/2/04

42 European Grid technology (Cont.)
The Semantic Grid. Project Status: RES. Country: UK. As the Semantic Web is to the Web, so is the Semantic Grid to the Grid.
UK OGSA Evaluation Project. Project Status: RES. Country: UK. The overall aim of the project is to evaluate the strengths and weaknesses of OGSA by developing an experimental OGSA-based grid across organizational boundaries.
WAMM. Project Status: RES. Country: IT. WAMM (Wide Area Metacomputer Manager) is a graphical tool, built on top of PVM. It provides the user with a graphical interface to assist in repetitive and tedious tasks such as host add/check/removal, process management, compilation on remote hosts, and remote command execution.
UNICORE. Project Status: RES. Country: DE. The UNiform Interface to Computer Resources aims to deliver software that allows users to submit jobs to remote high performance computing resources.
XtremWeb. Project Status: RES. Country: FR. XtremWeb is an academic, non-profit, multidisciplinary platform for Global Computing, designed to serve as a substrate for large-scale experiments. XtremWeb uses remote PCs connected to the Internet. Participants cooperate by providing their CPU idle time for outstanding research projects or large contests. The first project is already using this platform; other projects are under installation at Toronto and Huwan (China). 6/2/04

43 References
[Brown1] Robert G. Brown, Engineering a Beowulf-style Compute Cluster, Duke University Physics Department, 4/3/2003.
[Miles1] Arnie Miles, High Throughput Computing in the Beowulf Environment, Georgetown University Advanced Research Computing, 8/11/2003.
[Butler1] Butler, R.W. and Finelli, G.B. The infeasibility of quantifying the reliability of life-critical real-time software. IEEE Transactions on Software Engineering, 19(1): 3-12, January 1993.
[Littlewood1] Littlewood, B. and Strigini, L. Validation of Ultrahigh Dependability for Software-based Systems. Communications of the ACM, 36(11): 69-80, November 1993. 6/2/04

