
ARGONNE / CHICAGO
Ian Foster

Discussion Points
- Maintaining the right balance between research and development
- Maintaining focus vs. accepting broader scope
  - E.g., international collaboration
  - E.g., GriPhyN in the large (GriPhyN-2)
  - E.g., Terascale
- Creating a national cyberinfrastructure
  - What is our appropriate role?

Discussion Points
- Outreach to other disciplines
  - Biology, NEES, ...
- Virtual data toolkit
  - Inclusive or focused?
  - Resource issue, again
- Achieving critical mass of resources to deliver on the complete promise

Planning
- Review of Year 1 milestones
- Top 10 research challenges
- Demonstration projects
- Research projects + goals
- Workshops

Year 1 Milestones: Virtual Data
- Develop a basic information model to represent data elements, relationships between different data types, and characteristics of data elements
- Develop protocols for storing, discovering, and retrieving these models
- Design and develop tools for creating, accessing, and manipulating these models, both interactively and from planning and scheduling tools
- Deploy centralized metadata and replica catalog services; develop tools for managing catalogs
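The metadata and replica catalog milestone can be illustrated with a minimal sketch: a catalog that maps logical file names to physical replica locations and supports metadata-based discovery. All class, method, and URL names below are hypothetical illustrations, not the actual Globus replica catalog API.

```python
# Minimal sketch of a combined metadata + replica catalog
# (hypothetical interface; not the Globus replica catalog API).

class ReplicaCatalog:
    def __init__(self):
        self._replicas = {}   # logical file name -> set of physical URLs
        self._metadata = {}   # logical file name -> attribute dict

    def register(self, lfn, pfn, **attrs):
        """Record a physical replica of a logical file, with optional metadata."""
        self._replicas.setdefault(lfn, set()).add(pfn)
        self._metadata.setdefault(lfn, {}).update(attrs)

    def lookup(self, lfn):
        """Return all known physical locations of a logical file."""
        return sorted(self._replicas.get(lfn, set()))

    def find(self, **attrs):
        """Discover logical files whose metadata matches all given attributes."""
        return sorted(
            lfn for lfn, md in self._metadata.items()
            if all(md.get(k) == v for k, v in attrs.items())
        )

catalog = ReplicaCatalog()
catalog.register("lfn://cms/run42.evt",
                 "gsiftp://tier2.example.edu/data/run42.evt",
                 experiment="CMS")
catalog.register("lfn://cms/run42.evt",
                 "gsiftp://anl.example.gov/store/run42.evt")
print(catalog.lookup("lfn://cms/run42.evt"))   # both replica locations
print(catalog.find(experiment="CMS"))          # discovery by metadata
```

The point of the sketch is the separation the milestone calls for: discovery works against metadata, while access resolves a logical name to concrete replicas.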

Year 1 Milestones: Request Planning
- Develop generic models for representing execution plans; define an API and tools for constructing, traversing, and manipulating plan data structures; develop protocols and formats for storing and exchanging execution plans
- Develop a uniform policy representation for code, data, and resource access; develop a set of global and local policy scenarios that reflect the requirements of the user communities of the four physics experiments
- Develop simple optimization heuristics. The initial thrust will be on data movement only, focusing on the use of alternative (branching) plans to compensate for both resource failure and changes in resource performance
- Implement planning heuristics in a prototype planning module; evaluate the performance of alternatives with simulation- and model-based studies, as well as execution on the GriPhyN testbed
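The branching-plan idea above can be sketched concretely: a data-movement step carries an ordered list of alternative sources, and execution falls through to the next branch on failure. This is a conceptual sketch only; the function and URL names are made up and do not represent an actual GriPhyN planner interface.

```python
# Sketch of a branching data-movement plan: try each alternative source
# in order until one succeeds (hypothetical names; illustrative only).

def execute_plan(alternatives, transfer):
    """Try each alternative source; return the first that succeeds.

    `alternatives` is an ordered list of candidate source URLs and
    `transfer` is a callable that raises OSError on failure.
    """
    errors = []
    for source in alternatives:
        try:
            transfer(source)
            return source
        except OSError as exc:
            errors.append((source, exc))   # fall through to the next branch
    raise RuntimeError(f"all alternatives failed: {errors}")

# Example: the first replica is unreachable, so the plan falls back.
def fake_transfer(source):
    if "down" in source:
        raise OSError("host unreachable")

chosen = execute_plan(
    ["gsiftp://down.example.edu/f", "gsiftp://up.example.edu/f"],
    fake_transfer,
)
print(chosen)  # -> gsiftp://up.example.edu/f
```

A real planner would also reorder branches using observed resource performance, which is exactly the "changes in resource performance" case the milestone names.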

Year 1 Milestones: Request Execution
- Develop and evaluate a task control language capable of capturing the requirements, preferences, and dependencies of a PVDG request; implement a prototype interpreter for a basic subset of the language
- Enhance the "Gang Matching" capabilities of the ClassAd language and add these enhancements to the run-time support library
- Explore ways to enhance the ClassAd language to support events and triggers
- Develop a protocol for information exchange between the execution and planning agents
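Gang matching extends ordinary matchmaking by matching a request against several ads simultaneously (e.g., a compute slot and a storage element that must be at the same site), with constraints that cut across the ads. The following is a Python illustration of that idea, not the ClassAd language or the Condor implementation; all attribute names are invented.

```python
# Illustration of gang matching: a job must be matched simultaneously
# with a compute slot AND a storage element, with a cross-ad constraint
# (same site). Conceptual sketch only, not Condor ClassAds.

from itertools import product

def gang_match(job, cpu_ads, storage_ads):
    """Return (cpu, storage) pairs that jointly satisfy the job."""
    return [
        (cpu, se)
        for cpu, se in product(cpu_ads, storage_ads)
        if cpu["cpus"] >= job["needs_cpus"]
        and se["free_gb"] >= job["needs_disk_gb"]
        and cpu["site"] == se["site"]   # the cross-ad constraint
    ]

job = {"needs_cpus": 4, "needs_disk_gb": 50}
cpu_ads = [
    {"name": "cpu1", "site": "ANL", "cpus": 8},
    {"name": "cpu2", "site": "UC",  "cpus": 2},
]
storage_ads = [
    {"name": "se1", "site": "UC",  "free_gb": 500},
    {"name": "se2", "site": "ANL", "free_gb": 400},
]

matches = gang_match(job, cpu_ads, storage_ads)
# Only (cpu1, se2) survives: cpu2 has too few CPUs, and cpu1/se1
# fail the same-site constraint.
```

The cross-ad constraint is what distinguishes gang matching from running two independent bilateral matches.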

Year 1 Milestones: Virtual Data Toolkit
- VDT-1 (Basic Grid Services) provides an initial set of grid-enabling services and tools, including security, information, metadata, CPU scheduling, and data transport. VDT-1 will support efficient operation on O(10 TB) datasets, O(100) CPUs, and O(100 MB/s) wide-area networks, and will build extensively on existing technology.
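Those scale targets imply concrete turnaround times: moving a full O(10 TB) dataset over an O(100 MB/s) wide-area link takes on the order of a day. A quick back-of-envelope check:

```python
# Back-of-envelope check of the VDT-1 scale targets.
dataset_bytes = 10e12   # O(10 TB)
network_rate = 100e6    # O(100 MB/s) wide-area bandwidth
hours = dataset_bytes / network_rate / 3600
print(f"{hours:.1f} hours to move the dataset end-to-end")  # ~27.8 hours
```

This is why replica placement and planning matter at this scale: a naive whole-dataset transfer is a day-long operation even at the target bandwidth.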

Year 1 Milestones: CMS & LIGO
- CMS
  - Build basic services and 1-2 prototype Tier 2 centers
  - Complete High Level Trigger milestones and perform studies with ORCA, the CMS object-oriented reconstruction & analysis software
- LIGO
  - Develop a cataloging approach for data access methods & data location (metadata definition, design)
  - Develop an access and use model for LIGO data across the GriPhyN system

Year 1 Milestones: ATLAS and SDSS
- SDSS
  - Build a prototype distributed analysis system
- ATLAS
  - Connect the Athena analysis framework to a set of prototype virtual data services
    - Start with the Globus replica catalog service
    - Athena EventSelector service connected to a replica catalog (reading)
    - Athena replica catalog update service
  - Test basic file replication and transport using 500 GB testbeam data sets
  - Develop a Condor interface to the ATLAS testbed
  - Build basic services and 1-2 prototype Tier 2 centers

Schedule
- Feb
  - Doc: Data grid reference architecture v1
  - Doc: Virtual data architecture v1
  - Doc: LIGO application summary and virtual data requirements v1
  - Doc: CMS application summary and virtual data requirements v1
- Mar
  - Doc: CAS policy architecture v0
- Apr
  - VDT: GSI, GRAM, GridFTP, replica catalog, Condor-G
  - Doc: SDSS application summary and virtual data requirements v1
  - App: CMS data analysis v1 [tests GSI, GRAM, Condor-G]
- May
  - Docs: Architecture, virtual data, application requirements v2
- June
  - Doc: Data grid failure models and fault management research plan
  - Doc: GriPhyN simulation architecture and integrated simulation R&D plan
  - Proto: CAS policy architecture prototype
  - App: LIGO data analysis v1 [tests replica catalog]

Schedule
- July
  - Proto: Virtual data catalog
  - Proto: Request scheduler
  - App: ATLAS data analysis
- Aug
  - Tbed: Initial joint testbed with EDG
- Sept
  - Tbed: Joint testbed with EDG established with UC, ISI, CIT resources
  - App: CMS data analysis over EDG-GriPhyN[-PPDG?] testbed

Breakouts/Workshops?
- Virtual data representations
  - Naming, etc.
- Simulation strategies and tools
  - UC, CIT, UCB, others?
- Architecture
  - What are the essential (and missing) pieces?
- Failure models

Workloads
- Three types: queries, objects, files
- Koen Holtman's modeling work
- LIGO workloads (per Valerie)