Grand Challenge Architecture and its Interface to STAR
Sasha Vaniachine, presenting for the Grand Challenge Collaboration


Slide 1: Grand Challenge Architecture and its Interface to STAR
Sasha Vaniachine, presenting for the Grand Challenge Collaboration
STAR MDC3 Analysis Workshop, March 27, 2000

Slide 2: Outline
GCA Overview
STAR Interface:
–fileCatalog
–tagDB
–StGCAClient
Current Status
Conclusion

Slide 3: GCA: Grand Challenge Architecture
An order-optimized prefetch architecture for data retrieval from multilevel storage in a multiuser environment.
Queries select events and specific event components based upon tag attribute ranges:
–query estimates are provided prior to execution
–collections as queries are also supported
Because event components are distributed over several files, processing an event requires delivery of a “bundle” of files.
Events are delivered in an order that takes advantage of what is already on disk, with multiuser policy-based prefetching of further data from tertiary storage.
GCA intercomponent communication is CORBA-based, but physicists are shielded from this layer.
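The “estimate before execute” idea can be illustrated with a small sketch. This is not the GCA code: the `TagRow` and `estimateQuery` names and the single `NLa`-style attribute are invented for illustration, and the real Query Estimator worked from a bit-sliced index rather than a linear scan.

```cpp
#include <set>
#include <vector>

// Hypothetical per-event tag-index row (names invented for illustration).
struct TagRow {
    int nla;          // an example tag attribute, standing in for "NLa"
    int fileId;       // file holding this event's selected component
    bool fileOnDisk;  // whether that file is already staged on disk
};

struct QueryEstimate {
    int nEvents = 0;
    int nFiles = 0;
    int nFilesOnDisk = 0;
};

// Report what a range query like "NLa > cut" would touch, without staging
// any files -- a plain scan standing in for the real bit-sliced index.
QueryEstimate estimateQuery(const std::vector<TagRow>& index, int cut) {
    QueryEstimate est;
    std::set<int> files, filesOnDisk;
    for (const TagRow& row : index) {
        if (row.nla > cut) {
            ++est.nEvents;
            files.insert(row.fileId);
            if (row.fileOnDisk) filesOnDisk.insert(row.fileId);
        }
    }
    est.nFiles = static_cast<int>(files.size());
    est.nFilesOnDisk = static_cast<int>(filesOnDisk.size());
    return est;
}
```

The user can compare `nEvents` and `nFilesOnDisk` against the total and abandon a mistaken query before any tape mounts happen.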

Slide 4: Participants
NERSC/Berkeley Lab: L. Bernardo, A. Mueller, H. Nordberg, A. Shoshani, A. Sim, J. Wu
Argonne: D. Malon, E. May, G. Pandola
Brookhaven Lab: B. Gibbard, S. Johnson, J. Porter, T. Wenaus
Nuclear Science/Berkeley Lab: D. Olson, A. Vaniachine, J. Yang, D. Zimmerman

Slide 5: Problem
There are several:
–Not all data fits on disk ($$): only part of one year’s DSTs fits on disk. What about last year, or two years ago? What about hits and raw data?
–Available disk bandwidth means data read into memory must be used efficiently ($$): don’t read unused portions of the event, and don’t read events you don’t need.
–Available tape bandwidth means files read from tape must be shared by many users, and files should not contain unused bytes ($$$$).
–Facility resources are sufficient only if used efficiently: the system should operate steady-state, (nearly) fully loaded.

Slide 6: Bottlenecks
Keep recently accessed data on disk, but manage it so unused data does not waste space. Try to arrange that 90% of file accesses go to disk and only 10% are retrieved from tape.

Slide 7: Solution Components
Split events into components across different files so that most bytes read are used:
–Raw, tracks, hits, tags, summary, trigger, …
Optimize file size so tape bandwidth is not wasted:
–1 GB files, which means a different number of events in each file
Coordinate file usage so tape access is shared:
–Users select all files at once
–System optimizes retrieval and order of processing
Use disk space and bandwidth efficiently:
–Operate disk as a cache in front of tape
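The last point, operating disk as a cache in front of tape, can be sketched with a toy cache. The `DiskCache` class and the simple LRU eviction policy are illustrative assumptions; the real Cache Manager applied configurable multiuser policies and counted bytes, not file slots.

```cpp
#include <list>
#include <string>
#include <unordered_map>

// Minimal LRU file-cache sketch: disk acts as a cache in front of tape.
// Capacity is counted in "file slots" for simplicity.
class DiskCache {
    size_t capacity_;
    std::list<std::string> lru_;  // front = most recently used
    std::unordered_map<std::string, std::list<std::string>::iterator> pos_;
public:
    explicit DiskCache(size_t capacity) : capacity_(capacity) {}

    // Returns true on a disk hit, false when the file had to be staged
    // from tape (evicting the least recently used file if the disk is full).
    bool access(const std::string& file) {
        auto it = pos_.find(file);
        bool hit = (it != pos_.end());
        if (hit) {
            lru_.erase(it->second);
        } else if (lru_.size() == capacity_) {  // purge the coldest file
            pos_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(file);
        pos_[file] = lru_.begin();
        return hit;
    }
};
```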

Slide 8: STAR Event Model (figure: T. Ullrich, Jan. 2000)

Slide 9: Analysis of Events
1M events = 100 GB – 1 TB:
–100 – 1000 files (or more if not optimized)
Need to coordinate event associations across files.
Probably have filtered some percentage of events:
–Suppose 25% failed cuts after trigger selection; increase speed by not reading these 25%
Run several batch jobs for the same analysis in parallel to increase throughput.
Start processing with files already on disk, without waiting for staging from HPSS.

Slide 10: In the Details
–Range-query language, or query by event list:
“NLa>700 && run=101007”, {e1,r101007;e3,r101007;e7;r …}
Select components: dst, geant, …
–Query estimation:
# events, # files, # files on disk, how long, …
Avoid executing incorrect queries
–Order optimization:
The order in which you get events maximizes file sharing and minimizes reads from HPSS
–Policies:
# of pre-fetch, # queries/user, # active pftp connections, …
Tune behavior and performance
–Parallel processing:
Submitting the same query token in several jobs will cause each job to process part of that query
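The order-optimization point can be sketched in a few lines: serve the file bundles that are already on disk first, so analysis starts immediately while the remainder is prefetched from tape. The `Bundle` struct and `processingOrder` function are invented for illustration.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical file bundle: the set of component files one event group
// needs, with a flag saying whether all of its files are already staged.
struct Bundle {
    std::string name;
    bool onDisk;
};

// Order-optimization sketch: disk-resident bundles first, tape-resident
// bundles after, preserving the original order within each group.
std::vector<std::string> processingOrder(std::vector<Bundle> bundles) {
    std::stable_sort(bundles.begin(), bundles.end(),
                     [](const Bundle& a, const Bundle& b) {
                         return a.onDisk > b.onDisk;  // disk-resident first
                     });
    std::vector<std::string> order;
    for (const Bundle& b : bundles) order.push_back(b.name);
    return order;
}
```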

Slide 11: Organization of Events in Files (diagram: event identifiers (Run#, Event#) map to event components stored in files; the files needed by a group of events form file bundle 1, file bundle 2, and file bundle 3)

Slide 12: GCA System Overview (diagram: clients query STACS, which consults the index and fileCatalog, stages event files from HPSS via pftp, and serves event tags alongside other disk-resident event data)

Slide 13: STACS: STorage Access Coordination System (diagram)
–Query Estimator: uses the bit-sliced index and file catalog to produce an estimate for each query
–Query Monitor: applies the policy module, tracks query status and the cache map, and returns the list of file bundles and events
–Cache Manager: handles requests for file caching and purging, issuing pftp and file purge commands
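The bit-sliced index behind the Query Estimator can be approximated by a toy bitmap index: one bitmask per attribute bin, with a range query answered by OR-ing the bitmaps of the bins in the range. The `BitmapIndex` class is an illustration only; the production index used compressed bitmaps and scaled far beyond the 64 events a single word holds here.

```cpp
#include <cstdint>
#include <vector>

// Toy bitmap index: bins_[b] has bit e set if event e falls in bin b.
// Limited to 64 events by the single uint64_t word per bin.
class BitmapIndex {
    std::vector<uint64_t> bins_;
public:
    explicit BitmapIndex(int nBins) : bins_(nBins, 0) {}

    void add(int event, int bin) { bins_[bin] |= (uint64_t(1) << event); }

    // Bitmask of events whose binned attribute lies in [lo, hi]:
    // a range query costs one OR per bin, never a scan over events.
    uint64_t rangeQuery(int lo, int hi) const {
        uint64_t hits = 0;
        for (int b = lo; b <= hi; ++b) hits |= bins_[b];
        return hits;
    }
};
```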

Slide 14: Interfacing GCA to STAR (diagram)
STAR software: StIOMaker, fileCatalog database, tagDB, Index Builder
GC system: Query Estimator, Query Monitor, Cache Manager
GCA interface: gcaClient, FileCatalog, IndexFeeder

Slide 15: Limiting Dependencies
STAR-specific IndexFeeder server:
–IndexFeeder reads the “tag database” so that the GCA “index builder” can create the index
FileCatalog server:
–FileCatalog queries the “file catalog” database of the experiment to translate a fileID to HPSS and disk paths
GCA-dependent gcaClient interface:
–The experiment sends queries and gets back filenames through the gcaClient library calls
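The FileCatalog server's job, translating a fileID into the tape-side and disk-side locations, can be sketched as a lookup table. The `FileCatalog` class, the field names, and the paths in the test are all hypothetical, not the actual STAR schema.

```cpp
#include <map>
#include <stdexcept>
#include <string>

// Hypothetical result of a catalog lookup: where the file lives in HPSS
// and where it lands when staged to disk.
struct FileLocation {
    std::string hpssPath;
    std::string diskPath;
};

// Sketch of the FileCatalog service: fileID -> locations. The real server
// answered these lookups from the experiment's file-catalog database.
class FileCatalog {
    std::map<int, FileLocation> records_;
public:
    void add(int fileId, const FileLocation& loc) { records_[fileId] = loc; }

    FileLocation lookup(int fileId) const {
        auto it = records_.find(fileId);
        if (it == records_.end()) throw std::runtime_error("unknown fileID");
        return it->second;
    }
};
```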

Slide 16: Eliminating Dependencies (diagram)
StIOMaker (ROOT + STAR software) depends only on the abstract StGCAClient interface; the implementation libraries in /opt/star/lib (libGCAClient.so, libStGCAClient.so) bring in the CORBA + GCA software (libOB.so) and ROOT.

Slide 17: STAR fileCatalog
A database of information for the files in the experiment. File information is added to the DB as files are created. It is the source of file information:
–for the experiment
–for the GCA components (Index, gcaClient, …)

Slide 18: Cataloguing Analysis Workflow (diagram: the fileCatalog linked with the job monitoring system and the job configuration manager)

Slide 19: GCA MDC3 Integration Work (March 2000)

Slide 20: Status Today
MDC3 index:
–6 event components: fzd, geant, dst, tags, runco, hist
–179 physics tags: StrangeTag, FlowTag, ScaTag
–120K events
–8K files
Updated daily...

Slide 21: User Query ROOT Session (screen capture)

Slide 22: STAR Tag Database Access (screen capture)

Slide 23: Problem: SELECT NLa>700 (plot: an ntuple index read of only the selected events compared against reading all events)

Slide 24: STAR Tag Structure Definition
Selections like √(qxa² + qxb²) > 0.5 cannot use the index.
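A sketch of why such selections are costly: a cut on a derived quantity like √(qxa² + qxb²) cannot use the per-attribute index, so it falls back to evaluating the expression on every event. Materializing the derived value as its own tag at index-build time, shown below as an illustrative workaround the slides do not describe, would make it range-indexable like any other attribute. The `FlowTag` fields mirror the qxa/qxb names from the slide; everything else is invented.

```cpp
#include <cmath>
#include <vector>

// Two flow-tag attributes per event (names taken from the slide).
struct FlowTag { double qxa, qxb; };

// Index-unfriendly path: the cut on the derived magnitude must be
// evaluated against every event -- a full scan.
int countByScan(const std::vector<FlowTag>& tags, double cut) {
    int n = 0;
    for (const FlowTag& t : tags)
        if (std::sqrt(t.qxa * t.qxa + t.qxb * t.qxb) > cut) ++n;
    return n;
}

// Index-build step: store the derived quantity as a tag of its own,
// after which ordinary range queries (and the index) apply to it.
std::vector<double> buildDerivedTag(const std::vector<FlowTag>& tags) {
    std::vector<double> q;
    for (const FlowTag& t : tags)
        q.push_back(std::sqrt(t.qxa * t.qxa + t.qxb * t.qxb));
    return q;
}
```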

Slide 25: Conclusion
GCA developed a system for optimized access to multi-component event data files stored in HPSS. General CORBA interfaces are defined for interfacing with the experiment. A client component encapsulates interaction with the servers and provides an ODMG-style iterator. The system has been tested up to 10M events, 7 event components, and 250 concurrent queries. It is currently being integrated with the STAR experiment’s ROOT-based I/O analysis system.
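The ODMG-style iterator mentioned in the conclusion can be sketched as follows: the client walks events in the order the server hands back file bundles, not in submission order, which is how the order optimization stays invisible to the physicist. The `EventIterator` class is an illustration, not the StGCAClient API.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// An event is identified by (run number, event number), as on slide 11.
using EventId = std::pair<int, int>;

// Iterator sketch: events arrive bundle by bundle, in whatever order the
// server chose to stage the bundles; the user just calls next().
class EventIterator {
    std::vector<std::vector<EventId>> bundles_;
    size_t b_ = 0, e_ = 0;
public:
    explicit EventIterator(std::vector<std::vector<EventId>> bundles)
        : bundles_(std::move(bundles)) {}

    // Fills `out` with the next event; returns false when exhausted.
    bool next(EventId& out) {
        while (b_ < bundles_.size() && e_ >= bundles_[b_].size()) {
            ++b_;           // advance to the next file bundle
            e_ = 0;
        }
        if (b_ == bundles_.size()) return false;
        out = bundles_[b_][e_++];
        return true;
    }
};
```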