NW-GRID Future Developments
R.J. Allan, CCLRC Daresbury Laboratory
NW-GRID Training Event, 26th January 2007


Grid Applications
● We expect a growing number of “hosted” applications
● Academic and open-source software is not a problem
● Commercially licensed software may be more difficult; we are discussing it with vendors on a case-by-case basis
● Tell us what applications you need
● Can you provide any?

Testbed Objectives
● The project is structured around three major testbed milestones:
– Testbed 1 (2006): high-performance and parametric applications; we have been doing this, as already presented
– Testbed 2 (2007): a safe, bookable and accountable Grid
– Testbed 3 (2008): a collaborative real-time Grid
● For both academic and commercial users

Emerging Middleware
To respond to the requirements of these testbeds and applications, we will be developing or evaluating new middleware components:
● T1: we have demonstrated RMCS and GROWL during this course and are evaluating AHE
● T2: we will be deploying a portal similar to the NGS portal. We are also evaluating user and VO management, resource allocation and co-scheduling software, e.g. GOLD and HARC
● T3: we are looking for software for a collaborative real-time Grid, which will include visualisation, steering and job migration
We will move forward with the latest middleware technology to remain compatible with the NGS and Grids worldwide.
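The common pattern behind lightweight client toolkits such as GROWL and AHE is to hide the heavy Grid plumbing (certificates, job managers) behind a remote service, so the desktop client only needs a tiny API. The sketch below illustrates that pattern only; the class and method names are hypothetical and are not the real GROWL or AHE interfaces.

```python
# Hypothetical sketch of a lightweight client toolkit: the user-facing
# API is small, and the actual Grid middleware would sit behind a web
# service. These names are illustrative, not the GROWL/AHE API.
from dataclasses import dataclass, field


@dataclass
class JobSpec:
    """Minimal description of a job the user wants to run."""
    executable: str
    arguments: list = field(default_factory=list)
    input_files: list = field(default_factory=list)


class LightweightClient:
    """Client-side stub: delegates Grid plumbing to a remote service."""

    def __init__(self, service_url):
        self.service_url = service_url
        self._jobs = {}
        self._next_id = 0

    def submit(self, spec):
        # A real toolkit would POST the spec (plus proxy credentials)
        # to the service; here we just record it locally.
        job_id = f"job-{self._next_id}"
        self._next_id += 1
        self._jobs[job_id] = ("SUBMITTED", spec)
        return job_id

    def status(self, job_id):
        return self._jobs[job_id][0]


client = LightweightClient("https://example.org/jobservice")  # hypothetical URL
jid = client.submit(JobSpec(executable="/bin/dlpoly", arguments=["CONTROL"]))
print(jid, client.status(jid))
```

The point of the design is that the client never links against Globus or handles certificates directly, which is what makes such toolkits usable from ordinary desktops.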

Hardware
● Some additions to the Grid infrastructure from partners have already been mentioned
● We have a refresh phase coming up and are gathering requirements
● NW-GRID will no longer be a homogeneous system: it will consist of the four core clusters plus special-purpose facilities accessible to all users

A Unique North-West Project: Advancing Grid Technologies and Applications
[Diagram] Technology “tuned to the needs of practising scientists”, linking:
● Top end: HPCx and CSAR
● Mid range: NW-GRID and local clusters
● Desktop pools: Condor etc.
● Sensor networks and experimental facilities
● User interfaces: portals, client toolkits, active overlays
● Advanced network technology
● Hooks to other Grid consortia: NGS, WRG
● Applications and industry: pharma, meds, bio, social, env, CCPs

Range of Resource Types
● National-class supercomputers (CSAR and HPCx)
● Advanced architecture development systems (BlueGene/L)
● Commodity clusters (NW-GRID core, POL, etc.)
● Mid-range systems (SGIs, e.g. for visualisation)
● Campus Grid pools (Condor: DL internal, Liverpool)
● Desktop client systems: Windows, Linux, Mac
● Philanthropic computing (e.g. …)
● Mobile devices: Nokia 770, PDAs
● Sensor Grid networks: Lancaster
This brings interesting challenges!

Some Challenges
● New scalable algorithms
– should include steering and checkpointing capabilities
– make use of inter-Grid networks with high latencies (like our fibre network)
– scale to 1000s of processors; BlueGene may be upgraded to …
● Resource brokering and matching requirements to appropriate platforms
– efficient use of resources
– scheduling requirements: high priority, backfill, etc.
● Lightweight Grid computing: improve client access, static and mobile
● Dynamic Grid configurations and VOs
● Inter-working Grids, e.g. GridPP, CampusGrid
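The resource-brokering challenge above can be made concrete with a toy matchmaker: place each job on a platform that satisfies its CPU requirement, trying high-priority jobs first and letting small jobs backfill the remaining capacity. This is an illustrative sketch under simplified assumptions (CPU count as the only requirement), not NW-GRID's actual broker; all job and platform names are invented.

```python
# Toy resource broker: match jobs to platforms by priority, with
# small jobs backfilling leftover capacity. Illustrative only.

def match_jobs(jobs, platforms):
    """jobs: list of dicts with 'name', 'cpus', 'priority' (higher first).
    platforms: dict of platform name -> free CPU count.
    Returns a dict of job name -> platform name for every job placed."""
    placement = {}
    free = dict(platforms)  # work on a copy; don't mutate the caller's dict
    # High priority first; ties broken by size, so small jobs backfill
    # the gaps left after large ones are placed.
    for job in sorted(jobs, key=lambda j: (-j["priority"], j["cpus"])):
        # Try the tightest-fitting platform first to conserve big slots.
        for name, cpus in sorted(free.items(), key=lambda kv: kv[1]):
            if cpus >= job["cpus"]:
                placement[job["name"]] = name
                free[name] -= job["cpus"]
                break
    return placement


jobs = [
    {"name": "dft-run", "cpus": 64, "priority": 10},
    {"name": "param-sweep", "cpus": 4, "priority": 1},
    {"name": "viz", "cpus": 8, "priority": 5},
]
platforms = {"dl-cluster": 64, "lancs-cluster": 16}
print(match_jobs(jobs, platforms))
```

A production broker (e.g. what GOLD and HARC address) also has to handle advance reservations, co-scheduling across sites, and accounting, which this sketch deliberately leaves out.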

Possible e-Environment Grid Services Architecture
[Diagram] A distributed but fixed network of data servers (S_A, S_B) and collectors (C_A, C_B, C_C) linked by a registry: data servers publish to the registry, applications search/query it and subscribe to/access the data, with replica management between servers.
● Mobile smart devices register with multiple collectors to enable localisation (what happens when they move out of range?)
● Fixed collectors: power and network requirements?