
1 Computing plans from UKDØ. Iain Bertram 8 November 2000

2 DØ Computing Platform
- Two supported operating systems:
  - IRIX
  - Linux (Fermilab version of Red Hat 6.1; 7.0 under development)
- New detector – new experiment – new code:
  - Fully object-oriented C++.
    - Requires a commercial compiler (KAI) for C++ ANSI compliance.
    - Code framework based on SoftRelTools (portable).
    - Standard event data model and access framework built.
    - Complete DØ code structure successfully imported to the UK.
- Data analysis tool: ROOT

3 DØ Data Storage
- Data interface via the SAM model (see later).
- Data storage format: EVPACK
  - DØOM front-end interface allows a change of storage medium and interfacing with databases.
  - Based on DSPACK (NA49).
  - Self-describing data.
- Metadata storage
  - Luminosity, event catalogues, triggers, run conditions, calibration.
  - Oracle relational database.
  - Export format still under consideration: a combination of EVPACK, flat files, or network access to the central site.
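
To illustrate what "self-describing" means here, a minimal sketch (not the actual EVPACK/DSPACK layout; the record names, type tags and header packing are invented for the example): every block is written with its own name, type tag and payload length, so a generic reader can walk or dump a file without knowing the classes that produced it.

```python
import struct

# Toy self-describing record stream: each block carries its own name, type tag
# and payload length. Illustration only, not the EVPACK/DSPACK format.
HEADER = ">16s4sI"  # 16-byte name, 4-byte type tag, 4-byte payload length

def write_block(buf, name, type_tag, payload):
    buf.extend(struct.pack(HEADER, name.encode(), type_tag.encode(), len(payload)))
    buf.extend(payload)

def read_blocks(data):
    offset, header_size = 0, struct.calcsize(HEADER)
    while offset < len(data):
        name, tag, size = struct.unpack_from(HEADER, data, offset)
        offset += header_size
        yield name.rstrip(b"\0").decode(), tag.decode(), data[offset:offset + size]
        offset += size

buf = bytearray()
write_block(buf, "CaloHits", "f32 ", struct.pack(">3f", 1.2, 3.4, 5.6))
write_block(buf, "TrigBits", "u32 ", struct.pack(">I", 0xBEEF))
for name, tag, payload in read_blocks(bytes(buf)):
    print(name, tag, len(payload), "bytes")  # a reader needs no schema up front
```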

4 DØ Data Samples
- DØ begins data taking on 1 March 2001.
- Require ~250 CPUs to keep up with data processing (will have 200-400 processors at FNAL).

  Trigger rate                      50 Hz (20 Hz DC)
  Raw data size                     250 kB
  Raw data storage                  ~150 TB/year
  (Run 1 storage, for comparison)   ~60 TB
  Events/day                        ~2 M events
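
A back-of-envelope cross-check of these numbers (a sketch only, taking the 20 Hz "DC" figure as the sustained average trigger rate):

```python
# Rough check of the quoted rates, assuming 20 Hz sustained average trigger
# rate and 250 kB per raw event (both taken from the slide above).
avg_rate_hz = 20
raw_event_kb = 250
events_per_day = avg_rate_hz * 86_400                      # ~1.7 M (slide: ~2 M)
raw_tb_per_year = events_per_day * 365 * raw_event_kb * 1e3 / 1e12

print(f"events/day  ~ {events_per_day / 1e6:.1f} M")
print(f"raw storage ~ {raw_tb_per_year:.0f} TB/year")      # ~158 TB (slide: ~150 TB)
```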

5 Data Analysis
- Store data of different sizes:

  Data type                 Event size     Storage/year   Retention
  Raw                       250 kB         150 TB         Tape archive
  Full-record reco'd data   ~300-350 kB    21 TB          10% of sample
  Data summary              75 kB          45-90 TB       100% on tape
  Thumbnail                 10 kB          6 TB           100% on disk

- Thumbnail is used for basic analysis and event selection.
- Data summary contains enough information for almost all analyses.
- Full record is mainly for debugging purposes.
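
The per-tier yearly volumes follow from the same event rate; a rough sketch (assuming ~2 M events/day recorded every day of the year, an upper-end assumption):

```python
# Rough check of the data-summary and thumbnail yearly volumes.
events_per_year = 2e6 * 365                     # from ~2 M events/day
for tier, size_kb in (("data summary", 75), ("thumbnail", 10)):
    tb_per_year = events_per_year * size_kb * 1e3 / 1e12
    print(f"{tier:12s} ~ {tb_per_year:.0f} TB/year")
# data summary ~ 55 TB/year (slide: 45-90 TB); thumbnail ~ 7 TB/year (slide: 6 TB)
```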

6 DØ Monte Carlo Production
- Plan to generate ALL MC events off-site:
  - Currently 1 CPU can fully simulate ~400-500 events/day.
  - Total of ~500 CPUs at the beginning of 2000.
  - Generate 75M-100M events/year.

  Location        # CPUs
  NIKHEF          100
  UTA             64
  Lyon*           100
  Boston Univ.*   O2000 (192)
  Prague          ~15
  Lancaster       200
  TATA (Bombay)   proposal
  Rio             proposal
  UK              33% of current total

  * Not completely DØ
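
A quick sanity check of the quoted yearly MC yield (a sketch assuming all ~500 CPUs run year-round at the quoted per-CPU rate):

```python
# Yearly MC yield if ~500 CPUs each fully simulate 400-500 events per day.
cpus = 500
for events_per_cpu_day in (400, 500):
    total = cpus * events_per_cpu_day * 365
    print(f"{events_per_cpu_day}/CPU/day -> {total / 1e6:.0f} M events/year")
# 73-91 M events/year, consistent with the quoted 75M-100M target.
```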

7 MC Data Storage Requirements
- MC produced, stored, and analysed at remote sites – only summary information is transported to central sites.
  - Event database maintained centrally.
- MC storage:
  - 0.5-0.75 MB/event (more during the commissioning phase, up to 4 MB).
  - 50 TB/year (up to 400!).
  - UK share – 40%: 20 TB.
- Analysis framework and shared operating systems allow jobs to be run remotely and developed locally.
  - A common MC generation software framework is already implemented.
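
The storage figure follows directly (a sketch using the 75M-100M events/year target and the quoted per-event sizes):

```python
# Yearly MC storage volume for the quoted event sizes and production targets.
for events in (75e6, 100e6):
    for mb_per_event in (0.5, 0.75):
        tb = events * mb_per_event * 1e6 / 1e12
        print(f"{events / 1e6:.0f} M events x {mb_per_event} MB -> {tb:.0f} TB")
# 38-75 TB/year, bracketing the quoted ~50 TB/year; a 40% UK share of 50 TB is 20 TB.
```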

8 UKDØ Computing Facility
- JREI award granted at the start of 2000.
- Aim: a central computing facility to generate MC data and to provide UK access to data sets of interest.
- Tender awarded to Workstations UK.
- Delivery date: end of November 2000.
- Fully commissioned by early 2001.
  - Data taking begins 1 March 2001.

9 UKDØ Computing Facility
[Diagram: the UK DØ computing facility at Lancaster University (~200-CPU multiprocessor, 1 TB of disk, ~20 TB tape archive) connected via Super-JANET to university workstations, the RAL Datastore, the FNAL data source and other non-UK facilities; air shipping of tapes is also possible.]

10 UKDØ Computing Kit
- Supplied by Workstations UK.
- 2 x 2-CPU controller nodes.
- 196 worker CPUs.
- 1 TB expandable disk cache.
- ADIC AML/J tape library, ~30 TB using Mammoth II media.
  - Alternative media possible.
[Diagram: 196 worker CPUs connected through switches to the two controller nodes, each with a 500 GB bulkserver, and to the tape library (capacity ~30 TB, k£11 per 30 TB); links are 100 Mbit/s Ethernet and 1000 Mbit/s Ethernet over fiber.]

11 Lancaster Data Access via SAM
- SAM sits on top of the local storage system, in DØ's case Enstore; the underlying storage solution can be changed.
[Diagram: data flow between SAM and Enstore – tape import/export, production data, project and file requests, data transfers, volume information, file import, and the exchange of SAM and Enstore metadata (kept in sync).]
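
A minimal sketch of the layering this slide describes: SAM talks to mass storage through a narrow interface, so Enstore could in principle be replaced by another back end. All class and method names below are hypothetical illustrations, not the real SAM or Enstore API.

```python
from abc import ABC, abstractmethod

# Hypothetical illustration of SAM layered over a replaceable storage back end.
# None of these names come from the real SAM or Enstore code.

class MassStorage(ABC):
    @abstractmethod
    def fetch(self, file_name: str, dest_dir: str) -> str: ...
    @abstractmethod
    def store(self, local_path: str) -> str: ...

class EnstoreBackend(MassStorage):
    def fetch(self, file_name, dest_dir):
        # would stage the file from tape into the station's cache directory
        return f"{dest_dir}/{file_name}"

    def store(self, local_path):
        # would write the file to tape and return its volume/location
        return f"enstore://volume123/{local_path}"

class SamStation:
    def __init__(self, storage: MassStorage, cache_dir="/sam/cache"):
        self.storage = storage
        self.cache_dir = cache_dir
        self.catalogue = {}  # file name -> location (stands in for the metadata DB)

    def deliver(self, file_name):
        """Serve a file to an analysis project, staging it from mass storage."""
        return self.storage.fetch(file_name, self.cache_dir)

    def declare(self, local_path):
        """Record where a newly produced file was stored."""
        self.catalogue[local_path] = self.storage.store(local_path)

station = SamStation(EnstoreBackend())        # swap in a different back end here
print(station.deliver("raw_run0001.evpack"))
```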

12 DØ: SAM Data Access
- SAM stations provide file stagers and disk cache management.
- Analysis projects are run under a station using allocated resources.
- The "central-analysis" station runs on D0mino.
- Work is under way to test stations running outside FNAL; for example, in France the "ccin2p3-analysis" station has been running in a development setup.
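
As a rough illustration of the "disk cache management" role, a toy sketch (the least-recently-used policy and the sizes are invented for the example, not taken from SAM):

```python
from collections import OrderedDict

# Toy station disk cache: staged files are kept up to a size limit and the
# least recently used file is evicted first. Policy and numbers are illustrative.

class DiskCache:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.files = OrderedDict()                  # file name -> size (GB), LRU order

    def request(self, file_name, size_gb):
        if file_name in self.files:                 # cache hit: mark as recently used
            self.files.move_to_end(file_name)
            return "hit"
        while self.files and sum(self.files.values()) + size_gb > self.capacity_gb:
            self.files.popitem(last=False)          # evict least recently used file
        self.files[file_name] = size_gb             # "stage" the file from mass storage
        return "staged"

cache = DiskCache(capacity_gb=500)                  # e.g. one 500 GB bulkserver
for f in ("raw_001", "raw_002", "raw_001", "raw_003"):
    print(f, cache.request(f, size_gb=200))         # second raw_001 is a cache hit
```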

13 SAM Data Access
[Diagram: multiple SAM stations (A-F), each running projects with consumer/producer processes, connected to the mass storage system, a global optimizer, project masters, database and information servers, and a user/administration interface (API and GUI).]

14 Remote SAM Installation
- SAM is designed to be highly scalable.
- Station support at remote sites is anticipated to be routine in the future.
- Heavily used data can be cached in the stations at remote sites.
- Features like parallel FTP will be built into the transfer mechanism, and moving data to and from remote sites will be as simple as "sam run project" and "sam store".
- Data produced at remote sites can be stored in their local tape management systems and the locations recorded in SAM. Interfaces to their local SAM stations will make the data accessible anywhere a SAM station exists.

15 UK Access to Data
- Access to data will require setting up a local SAM station.
  - All DØ data will be available.
  - The station can also be used for storing analysis data sets, making analyses reproducible.
- Develop tools to allow analysis jobs to be run on the UKDØ computing facility as required.

16 DØ Data Grid: Data Locations
[World map of DØ data-grid sites, including Boston, TATA (Bombay) and Rio.]

17 Conclusions
- UKDØ will have a major impact on DØ computing.
  - World-wide grid (2-3 continents).
  - All DØ data available in the UK.
  - Frequently used data sets can be stored locally.
- The Lancaster farm is designed to be expandable.
  - Tape library: 30-600 TB.
  - Multiple media allowed.
- Plan to upgrade CPUs and hard disks in 2002.

