BESIII Computing Environment ------Computer Centre, IHEP, Beijing.

BESIII Computing Environment ------Computer Centre, IHEP, Beijing.

BESIII Computing Environment ------Computer Centre, IHEP, Beijing. Outline: 1. Computing in IHEP 2. BESIII Computing (Pre-Grid)

1.Computing in IHEP BesIII Computing Environment Computer Centre, IHEP, Beijing.

Current BESII Computing. The total real and Monte Carlo data produced by BES and BESII are about 13 TB.

            Event size (kB)   Number of events   Data volume (GB)
Raw data          2               ~1.5×
Rec. data         6               ~1.5×
DST data          0.6             ~1.5×
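The exponents of the event counts did not survive the transcript. As a hedged consistency check, ~1.5×10^9 events per data type (an assumption, chosen only because it reproduces the quoted ~13 TB total) gives the following arithmetic in a short Python sketch:

```python
# Consistency check for the BESII table: the exponent of the "~1.5x..." event counts was
# lost in the transcript; ~1.5e9 events (an assumption) reproduces the ~13 TB total quoted.
N_EVENTS = 1.5e9                                   # assumed event count per data type
SIZES_KB = {"raw": 2.0, "rec": 6.0, "dst": 0.6}    # event sizes (kB) from the table

total_tb = sum(size_kb * N_EVENTS / 1e9 for size_kb in SIZES_KB.values())  # 1 TB = 1e9 kB
print(f"total: {total_tb:.1f} TB")                 # ~12.9 TB, close to the ~13 TB quoted
```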

Current BESII Computing. The BESII computing environment is built on: PC/Linux cluster, NFS/AFS, OpenPBS/Ganglia, CERN CASTOR.

Current BESII Computing: schematic view of the BESII computing environment.

Current BESII Computing. The BESII computing environment comprises: 6 file servers (2 GB memory and an extra gigabit network adapter each), 7.3 TB of disk space, 64 worker nodes (Pentium IV 2.6 and 2.8 GHz CPUs and 1 GB memory each), and Cisco and Cabletron switches.

2.BesIII Computing BesIII Computing Environment Computer Centre, IHEP, Beijing.

The BESIII experiment is planned to begin collecting data in 2007. The peak luminosity of BEPCII at the J/ψ resonance will be about 10^33 cm^-2 s^-1, with a peak event rate of 3000 Hz. The J/ψ running in the first year will collect about 1×10^10 J/ψ events; in the following years, the running for ψ', D and Ds studies will collect about another 1×10^10 events in total.

Estimated data size of the BESIII experiment in the first production year:

            Event size (kB)   Number of events   Data volume (TB)
Raw data         12              ~1×10^10             120
Rec. data        24              ~1×10^10             240
DST data          2              ~1×10^10              20
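The volumes in the table follow directly from event size times number of events; a minimal Python sketch of that arithmetic, also showing the raw data rate implied by the 3000 Hz peak event rate (decimal units, 1 TB = 10^12 bytes):

```python
# BESIII first-year data volumes and peak raw data rate, from the numbers quoted above.
PEAK_RATE_HZ = 3000                          # peak event rate
N_EVENTS = 1e10                              # ~1x10^10 J/psi events in the first year
SIZES_KB = {"raw": 12, "rec": 24, "dst": 2}  # event sizes

print(f"peak raw rate: {PEAK_RATE_HZ * SIZES_KB['raw'] / 1e3:.0f} MB/s")   # ~36 MB/s

for sample, size_kb in SIZES_KB.items():
    volume_tb = size_kb * N_EVENTS / 1e9     # kB per event x events -> TB
    print(f"{sample}: {volume_tb:.0f} TB")   # raw 120, rec 240, dst 20
```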

BESIII Computing Environment: estimated total amount of data of BESIII over its whole production lifetime. The number of MC-generated events will be twice that of the real data; two versions of the reconstructed real data will be stored, while only one copy of the simulated data will be kept.

            Data volume (TB)                   Storage medium
            J/ψ events    ψ', D, Ds events
Raw data       120            120                 Tape
Rec. data      240×2          240×2               Tape
DST data       20×2           20×2                Disk
MC             240            240                 Tape
MC-Rec.        240×2          240×2               Tape
MC-DST         20×2           20×2                Disk
Sum                                 2,800 (TB)
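The 2,800 TB sum can be reproduced by simple bookkeeping over the per-sample volumes; a short sketch (the ×2 factors are the copy and MC-statistics factors stated above):

```python
# Lifetime BESIII data-volume bookkeeping, per 1x10^10-event sample (values in TB).
PER_SAMPLE_TB = {
    "raw": 120,
    "rec": 240 * 2,       # two versions of the reconstructed real data are kept
    "dst": 20 * 2,
    "mc_raw": 240,        # MC statistics are twice the real data, one copy stored
    "mc_rec": 240 * 2,
    "mc_dst": 20 * 2,
}
N_SAMPLES = 2             # the J/psi sample and the psi', D, Ds sample

total_tb = sum(PER_SAMPLE_TB.values()) * N_SAMPLES
print(f"total: {total_tb} TB")   # 2800 TB, matching the 2,800 TB sum in the table
```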

1. It is very complicated and difficult to estimate the number of CPUs needed for all the data production, MC simulation and physics analysis, since there is no direct experience yet of data processing from BESIII itself, and IT technology develops very fast.
2. The total CPU power needed for data processing and analysis in the BESIII experiment is estimated to add up to 1,000 to 2,000 of the fastest CPUs available at present.
3. This is a very rough estimate and will change with growing knowledge of BESIII data processing and with IT trends in the following years.
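The slide gives only the result of this estimate. Below is a hedged sketch of how such a number is typically arrived at; the per-event CPU costs are illustrative assumptions, not values from the proposal, and the per-CPU rating uses the ~1000 SI2K figure quoted later in the technology-trend section.

```python
# Rough CPU-count estimate for BESIII processing (per-event costs are illustrative assumptions).
SI2K_PER_CPU = 1000            # "fastest CPU at present" ~ PIV 2.8 GHz ~ 1000 SI2K
WALL_SECONDS_PER_YEAR = 3e7    # effective processing time available per year

# (events per year, assumed SI2K*s per event) -- NOT from the proposal, purely illustrative
workloads = {
    "real-data reconstruction": (2e10, 500),   # e.g. two passes over ~1x10^10 events
    "MC simulation + rec":      (2e10, 800),
    "physics analysis":         (1e10, 400),
}

total_si2k_s = sum(n_events * cost for n_events, cost in workloads.values())
n_cpus = total_si2k_s / (WALL_SECONDS_PER_YEAR * SI2K_PER_CPU)
print(f"~{n_cpus:.0f} CPUs")   # order 1,000, the scale quoted in the proposal
```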

Data flow of BESIII experiment

The BESIII Computing Environment will provide three main services:
1. Public interactive login service
2. Batch job service
3. Storage service
and other services: HTTP service, mail service, AFS home directory service, application service, ...

Computing Mode (Pre-Grid): schematic of BESIII analysis. Physicist users log on through a Public Logon Service Platform with a Load Balancing System, which provides uniform environment variables, uniform file access, uniform tool access and uniform library access; behind it, the batch cluster does the computing (MC, reconstruction, analysis) and the mass storage (PB scale) serves data access (GB/MB to the user).

The operating system and tool software of the BESIII computing environment will mainly be free software. One should note the risks of using free software: performance and stability.

------Computer Centre, IHEP, Beijing. OpenPBS vs. PBS Pro BesIII Computing Environment

PBS Pro vs. LSF Computer Centre, IHEP, Beijing. BQS? CONDOR? …… ? BesIII Computing Environment

Physical part: 1. Computing Element, 2. Storage Element, 3. Network Element, 4. Security Element.
Logical part: I. Security Policy, II. Administration Policy, III. User Policy.
Construction/Upgrading, Maintenance/Support.

The BESIII computing environment is planned to be constructed step by step during 2005-2007:
1. Jan. 2005 - Dec. 2005: Study each of the computing components: CPU, disk, tape, the SLC3 Linux system, batch job management software, etc. Design a computing prototype and test the hardware and software components in an integrated environment.
2. Jan. 2006 - Dec. 2006: Build a test-bed according to the prototype, with full functionality, at 1/10 the scale of the complete BESIII computing environment. Run the first Data Challenge and detailed system tests. Then extend the test-bed to 1/2 scale and run the second Data Challenge.
3. Jan. 2007 onwards: Extend the system to full scale and provide the full computing services needed for the first year of BESIII data production. Extend the system further as the requirements of the experiment grow.

BESIII Prototype. On the prototype, we need to: test the hardware, test the software, run Data Challenge tests, train people, ...

Schematic of the HEP DataGrid in China (this diagram shows China only): the BES online system, offline PC farm, CASTOR physics data cache and the IHEP Computing Center, organised in Tier 1, Tier 2 and Tier 3 levels (sites include YBJ and PKU), with a 10 Gbps e-science link to CERN (CMS & ATLAS).
1. The BESIII computing environment will be constructed as an open architecture: robust, secure, scalable and extendable, and open to integration with new technologies such as the Grid.
2. It should share computing resources with other BESIII collaborators and with other HEP experiments. Grid?

Thanks

Estimation ------Computer Centre, IHEP, Beijing. Suppose 10 physicists each access 1/100 of the whole data sample for analysis and want to get results within 24 hours: 55 SI95. J/ψ, J/ψ + ψ': 2.6×10^4 × 2 --> 6×10^4 × 2 (?), peak requirement.
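A hedged sketch of the shape of this estimate; the per-event analysis cost is an assumption chosen only for illustration, so the resulting number should not be read as the slide's own figure:

```python
# Peak analysis requirement: 10 physicists, each over 1/100 of the data, results in 24 hours.
N_PHYSICISTS = 10
TOTAL_EVENTS = 2e10              # full real-data sample (J/psi plus psi', D, Ds)
FRACTION = 1.0 / 100             # data fraction analysed by each physicist
DEADLINE_S = 24 * 3600           # 24 hours
COST_SI2K_S_PER_EVENT = 10.0     # assumed analysis cost per event (illustrative only)

events = N_PHYSICISTS * TOTAL_EVENTS * FRACTION
required_si2k = events * COST_SI2K_S_PER_EVENT / DEADLINE_S
print(f"~{required_si2k:.1e} SI2K ~ {required_si2k / 10:.1e} SI95")  # ~2.3e5 SI2K ~ 2.3e4 SI95
```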

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. 3. IT Technology Trend

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. PIV 2.8 GHz ≈ 1000 SI2K, < 1000 USD (including motherboard, memory, CPU, hard disk and case). PASTA: the Technology Tracking Team for Processors, Memory, Storage and Architectures, LCB, CERN. 1 SI95 ~ 10 SI2K ~ 10 CU.
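A tiny sketch of the unit conversions and price/performance implied by these rules of thumb:

```python
# Benchmark-unit conversions and price/performance, using the rules of thumb quoted above.
SI2K_PER_SI95 = 10        # 1 SI95 ~ 10 SI2K ~ 10 CU
PIV_28_SI2K = 1000        # PIV 2.8 GHz ~ 1000 SI2K
PIV_28_USD = 1000         # < 1000 USD including motherboard, memory, CPU, disk and case

print(f"PIV 2.8 GHz ~ {PIV_28_SI2K / SI2K_PER_SI95:.0f} SI95")          # ~100 SI95
print(f"price/performance < {PIV_28_USD / PIV_28_SI2K:.2f} USD/SI2K")   # < 1 USD per SI2K
```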

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. (Chart: CPU performance, up to 3.2 GHz.) 1 SI95 ~ 10 SI2K ~ 10 CU.

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. (Chart: CPU performance in SI2K.) 1 SI95 ~ 10 SI2K ~ 10 CU.

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing.

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. AMD: Advanced Micro Devices, Epox 8KHA+ motherboard, AMD Athlon(TM) XP. Intel: Dell Precision Workstation 360 (P4) series. Data from the Chinese market, Oct. 2004.

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Storage

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Storage

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Data source: prices announced in the media. Storage

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Comparison of current tape technologies. Storage

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Comparison of different tape media. LTO2?

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Market Response

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. 4.Cost Estimate

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing.
Electrical power supply:
1. Computing boxes: 300 W × 500 boxes = 150 kW
2. Storage system: 10 kW
3. Cooling system: 10 kW
Total: 170 kW
Room space:
1. Computing boxes: 150 cm × 50 cm × 500 boxes = 375 m^2 / 3 layers ≈ 125 m^2
2. Storage system: 50 m^2
3. Cooling system: 10 m^2
Total: ~200 m^2
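The power and floor-space totals follow from the per-item figures; a short sketch reproducing the slide's arithmetic:

```python
# Power and floor-space estimate for the BESIII computing room (figures from the slide).
N_BOXES = 500

power_kw = {
    "computing boxes": 300 * N_BOXES / 1000,   # 300 W per box -> 150 kW
    "storage system": 10,
    "cooling system": 10,
}
print(f"power: {sum(power_kw.values()):.0f} kW")           # 170 kW

space_m2 = {
    "computing boxes": 1.50 * 0.50 * N_BOXES / 3,          # 150 cm x 50 cm, 3 layers -> 125 m^2
    "storage system": 50,
    "cooling system": 10,
}
print(f"space: ~{sum(space_m2.values()):.0f} m^2")         # ~185 m^2, rounded up to ~200 m^2
```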

(Photo: PC farm at CERN.)

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. Manpower: 1. R&D group: 5 (software) + 5 (hardware). 2. Maintenance & technical support: 5 (software) + 10 (hardware). Total: 30??

BesIII Computing Technical Proposal Computer Centre, IHEP, Beijing. 5.Load Balancing & Cluster

IHEP future computing environment (a junior CERN?): schematic with interactive servers (LxPlus, e.g. lxplus001) and batch servers/farm (e.g. lxbatch001) behind load balancing (AFS/LSF?), plus disk servers (e.g. disk001) and tape servers (e.g. tape001) accessed via rfio, with data storage managed by CASTOR.

LVS (Linux Virtual Server)

IHEP Public Linux Logon Platform (LxPlus?) with LVS
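As a conceptual illustration of the load-balanced public logon service (not the actual LVS configuration; node names and the load metric are hypothetical), a minimal Python sketch that directs a new session to the least-loaded interactive node:

```python
# Conceptual sketch only: pick the least-loaded interactive node for a new login session.
# A real deployment would use LVS / DNS aliasing; node names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    load: float      # e.g. 1-minute load average divided by number of CPUs
    sessions: int    # current interactive sessions

def pick_node(nodes: list[Node]) -> Node:
    """Return the node that should receive the next interactive session."""
    return min(nodes, key=lambda n: (n.load, n.sessions))

cluster = [Node("lxplus001", 0.8, 12), Node("lxplus002", 0.3, 5), Node("lxplus003", 0.5, 9)]
print(pick_node(cluster).name)   # -> lxplus002
```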