Atlas Software Structure

A complicated system maintained at CERN, with three main kinds of program:
– Framework for Monte Carlo and real data (Athena): MC data generation, simulation and reconstruction, plus analysis of MC/real data. CPU bound.
– Conversion of data files (D3PD maker): raw data (AOD/ESD) → flat ntuples ("rootuples"). Extreme I/O.
– Analysis of ntuples (ROOT/PROOF): a combination of I/O-bound and CPU-bound work.

Goal: compare these programs on real and virtual systems.
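To make the ntuple-analysis step concrete, here is a minimal PyROOT sketch of that kind of job. The file name, tree name and branch name are hypothetical placeholders, not taken from the slides.

    # Sketch of the "analysis of ntuples" step (I/O + CPU bound): read every
    # entry of a flat D3PD-style ntuple and fill a histogram.
    # "egamma_d3pd.root", "egamma" and "el_pt" are hypothetical names.
    import ROOT

    ROOT.gROOT.SetBatch(True)
    f = ROOT.TFile.Open("egamma_d3pd.root")   # flat ntuple from the D3PD maker
    tree = f.Get("egamma")                    # one entry per event

    h = ROOT.TH1F("h_el_pt", "electron pT;pT [MeV];entries", 100, 0.0, 100000.0)
    for event in tree:                        # I/O: each entry is read from disk
        for pt in event.el_pt:                # CPU: loop over electrons in the event
            h.Fill(pt)

    c = ROOT.TCanvas()
    h.Draw()
    c.SaveAs("el_pt.png")
    f.Close()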

Atlas T3 Virtual System Design

(Diagram: layout of the virtualized Tier 3 cluster)
– VM master storage: iSCSI on a Dell blade server running Openfiler, also acting as file server on the WAN/LAN.
– Gateway (atlasgw, SUSE Core i7 VM host): runs the Condor manager/scheduler, the PROOF master, and a proxy for CVMFS and coolDB.
– Batch nodes: 40+ VMs on SUSE Core i7 hosts, 3 GB memory per VM, no user login.
– Users reach the cluster from their desktops through a switch.
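For day-to-day operation of this layout, the pool state can be checked from the gateway with a small script. This is a hedged sketch: it assumes HTCondor's condor_status command is on PATH on atlasgw; the -af (autoformat) flag is standard in current HTCondor releases.

    # Count Condor slots by state across the VM batch nodes (sketch).
    import subprocess
    from collections import Counter

    out = subprocess.run(
        ["condor_status", "-af", "Machine", "State"],  # one "machine state" pair per slot
        check=True, capture_output=True, text=True,
    ).stdout

    states = Counter(
        fields[1]
        for fields in (line.split() for line in out.splitlines())
        if len(fields) >= 2
    )
    print("slots by state:", dict(states))  # e.g. {'Claimed': 35, 'Unclaimed': 5}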

Real/Virtual Comparison Setups

Virtual system: Dell PowerEdge R710, SUSE 11.1 with Xen kernel, hosting 16 CernVM instances (v1.6, Xen, SL4 batch image), 2 GB memory per VM. Athena served through CVMFS; local file system; Bonnie++ disk benchmarks run inside the VMs.

Real system: Dell PowerEdge R710, SL5, 8-core (HT) real machine, 36 GB memory in total. Athena installed locally; local file system; Bonnie++ disk benchmarks.

Bonnie++ results:
                   Virtual (in VM)    Real
  Read PerChr:     67955 K/s          55828 K/s
  Read Block:            K/s                K/s
  Random seeks:    428.9/sec          410/sec
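Numbers like those above can be collected with a small wrapper run once on the real host and once inside a VM. A hedged sketch follows: it assumes bonnie++ is installed and uses its standard -d (directory), -s (file size), -m (label) and -q (quiet, CSV to stdout) options. The CSV column layout varies between bonnie++ versions, so the raw record is printed rather than parsed by fixed index.

    # Run one Bonnie++ pass and return its machine-readable CSV record (sketch).
    import subprocess

    def run_bonnie(directory, size="8g", label="test"):
        # size should exceed the machine's RAM so the page cache cannot
        # satisfy the reads; run as a non-root user (bonnie++ refuses to
        # run as root unless told otherwise with -u).
        out = subprocess.run(
            ["bonnie++", "-d", directory, "-s", size, "-m", label, "-q"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [l for l in out.splitlines() if l.strip()][-1]

    print(run_bonnie("/scratch", label="R710"))  # run on the host, then inside a VM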

Athena Performance - CPU Bound

– For CPU-bound jobs, virtual systems show a 10% performance loss.
– MC generation, simulation and reconstruction are CPU bound.
– Performance scales with CPU speed.

(Plot: Athena, full MC, 1000 events; HT)
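A result like the 10% figure can be reproduced by timing the same job on both systems. A hedged sketch follows; athena.py is the standard Athena driver, while the job-options file name is a hypothetical placeholder.

    # Time one CPU-bound Athena job; run the same script on the real host
    # and inside a VM and compare the two wall times.
    import subprocess, time

    def time_job(cmd):
        t0 = time.time()
        subprocess.run(cmd, check=True,
                       stdout=subprocess.DEVNULL, stderr=subprocess.STDOUT)
        return time.time() - t0

    # "fullMC_jobOptions.py" is a placeholder, not a file named on the slides
    wall = time_job(["athena.py", "fullMC_jobOptions.py"])
    print(f"wall time: {wall:.1f} s")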

D3PD Maker Performance - CPU+I/O Bound

– No I/O performance loss for virtual machines running ATLAS software.
– Evidence that I/O performance (D3PD maker) in virtual machines is actually better than in real machines (up to 20%), due to other limitations on the real machine.
– A local HD RAID array with 250 MB/s read bandwidth (block) and 440 random seeks/sec cannot provide enough I/O: performance peaks at 6 parallel jobs, and the loss relative to a virtual RAM disk is already noticeable at 3 jobs.
– *All memory caches were cleared before each run.

(Plot: Egamma D3PD maker, 1000 events; HT)
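The footnote about clearing caches matters for any rerun of this measurement: otherwise a second pass reads from the page cache instead of the disk. A hedged sketch of the procedure (clear caches, start N parallel D3PD jobs, time until the last one finishes) follows; writing "3" to /proc/sys/vm/drop_caches is the standard Linux way to drop the page, dentry and inode caches, and the job command line is a hypothetical placeholder.

    # Clear caches, run N parallel D3PD-maker jobs, report the total wall time.
    import subprocess, time

    def drop_caches():
        subprocess.run(["sync"], check=True)
        with open("/proc/sys/vm/drop_caches", "w") as f:  # requires root
            f.write("3\n")

    def run_parallel(n_jobs):
        drop_caches()
        t0 = time.time()
        procs = [subprocess.Popen(["athena.py", "EgammaD3PDMaker_jobOptions.py"],
                                  stdout=subprocess.DEVNULL)
                 for _ in range(n_jobs)]          # placeholder job-options file
        for p in procs:
            p.wait()
        return time.time() - t0

    for n in (1, 3, 6, 8):   # slide: throughput peaks near 6 jobs on the RAID array
        print(n, "jobs:", f"{run_parallel(n):.0f} s")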

iostat Readings

iostat output from the machines while 8 parallel D3PD jobs run on the real and virtual systems.
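A snapshot of this kind can be taken with sysstat's iostat while the jobs run; -d (device report) and -x (extended statistics) are standard iostat flags, and "sda" is a placeholder device name.

    # Sample extended disk statistics every 5 seconds for one minute while
    # the 8 parallel D3PD jobs are running.
    import subprocess

    out = subprocess.run(["iostat", "-d", "-x", "5", "12"],
                         check=True, capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith("sda"):   # placeholder device; watch r/s and %util
            print(line)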

PROOF Performance
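This final benchmark exercises PROOF-based analysis of the flat ntuples, the I/O+CPU-bound step from the first slide. For orientation, a minimal PyROOT sketch of such a job follows; TProof.Open, TChain.SetProof and TChain.Process are standard ROOT API, while the master URL, input file, tree name and selector are hypothetical placeholders.

    # Run a TSelector over a D3PD-style ntuple through a PROOF master (sketch).
    import ROOT

    proof = ROOT.TProof.Open("proofmaster.example.org")  # the "Proof master" VM
    chain = ROOT.TChain("egamma")                        # placeholder tree name
    chain.Add("root://fileserver//data/d3pd/run1.root")  # placeholder input file
    chain.SetProof()                                     # route Process() via PROOF
    chain.Process("MySelector.C+")                       # hypothetical TSelector
    proof.Print()                                        # workers and session summary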