
Workshop sulle problematiche di calcolo e reti nell'INFN, Paestum, 12 Giugno 2003
Off-line Computing for the AIACE and HERMES Experiments
Physics Motivation:
– the HERMES experiment
– the AIACE experiment
The Offline Tree
The Offline Computing Facility (LNF)
Summary and Conclusions
Federico Ronchetti, INFN Gruppo III, Laboratori Nazionali di Frascati

E.M. Nuclear Physics
AIACE and HERMES: probing nuclear and sub-nuclear structure via (real or virtual) photon interactions.
Advantage: a very well known and flexible probe.
Disadvantage: the electromagnetic interaction is a relatively weak probe (i.e. low statistics).
In recent years:
– availability of high-current (100 μA), high duty-cycle (100%) electron accelerators
– EM processes can now be studied with a statistical accuracy comparable to strong processes.
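As a rough illustration of why beam current and luminosity drive the statistics (the numbers below are assumed for illustration, not quoted in the talk): for a process of cross section σ, the event rate is R = L × σ. Taking, say, L = 10³⁴ cm⁻² s⁻¹ and σ = 1 μb = 10⁻³⁰ cm²,

R = L × σ = 10³⁴ cm⁻² s⁻¹ × 10⁻³⁰ cm² = 10⁴ events/s ≈ 10⁹ events per day of beam,

which is what brings electromagnetic channels into the same statistical regime as strong-interaction measurements.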

Detector Complexity
Older experiments:
– simple detector electronics (few 10² channels)
– simple software development (VME/FORTRAN) and analysis (HBOOK/PAW/GEANT3)
– low data rates (few 100 kB/s)
Nowadays:
– much larger detector complexity (20÷40 × 10⁴ channels)
– more mature software development (UNIX/Linux, CVS, C/C++, Java for slow control) and analysis tools (ROOT/GEANT4)
– higher data rates (few 10 MB/s; see the daily-volume estimate below)
Adequate network and computing support is needed:
– ONLINE: most of the requirements are covered using commercial/industrial electronics
– OFFLINE: medium-low cost computing and open source software.
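For scale (simple arithmetic, not a figure from the talk), sustained running at these rates corresponds roughly to:

100 kB/s × 86 400 s/day ≈ 9 GB/day (older experiments)
10 MB/s × 86 400 s/day ≈ 0.9 TB/day (current experiments)

which is why storage and network bandwidth, not only CPU, become the limiting resources.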

The Hall B CLAS Detector
[Detector schematic: BEAM, TORUS, DC, CC, TOF, ECAL]
CLAS Highlights:
– 6 GeV maximum energy, fixed-target e–N scattering
– 25 MB/s data rate (L = … cm⁻² s⁻¹)
– accelerator current: …
– toroidal field (2.5 T·m)
– … electronic channels

The HERMES Detector
HERMES Highlights:
– 27.5 GeV electron beam at the HERA collider (e⁺–p)
– L = 10^… cm⁻² s⁻¹
– solenoidal magnetic field
– maximum beam current I = 40 mA
– … electronic channels

CLAS Data Acquisition
Detector readout:
– FASTBUS ADC/TDCs
– VME/FASTBUS PowerPC controllers (ROCs)
– VxWorks real-time OS
– 100 Mbit/s Ethernet
Run Control:
– Sun UltraSPARC cluster
– cross compilation of the VxWorks readout code
– ROC download
Event distribution and building:
– online monitoring
– event display
– disk storage (RAID)
– tape storage (SILO)

The Offline Tree
JLAB - DATA COOKING
– RAW data files (~5 TB/experiment)
– Calibration
– Event reconstruction:
  – ADC/TDC → energy, time (sketched in the example below)
  – 4-momentum determination
– Data reduction:
  – skimmed files
LNF - GEANT SIMULATION & DATA ANALYSIS
– RAW simulation files (~200 GB):
  – event generation
  – post processing
  – reconstruction
– Skimmed data (~500 GB):
  – analysis
  – N-tuple production
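To make the first reconstruction step concrete, here is a minimal C++ sketch of an ADC/TDC to energy/time conversion. The calibration constants, struct layout and values are purely illustrative assumptions and do not reflect the actual CLAS reconstruction code:

#include <cstdio>

// Illustrative per-channel calibration constants (assumed values).
struct Calibration {
    double pedestal;   // ADC counts with no signal
    double gain;       // MeV per ADC count above pedestal
    double t0;         // time offset, ns
    double slope;      // ns per TDC count
};

// Convert a raw (ADC, TDC) pair into calibrated energy and time.
void calibrate(int adc, int tdc, const Calibration& c,
               double& energyMeV, double& timeNs)
{
    energyMeV = (adc - c.pedestal) * c.gain;
    timeNs    = c.t0 + tdc * c.slope;
}

int main()
{
    Calibration c{120.0, 0.25, -15.0, 0.05};   // hypothetical channel constants
    double e = 0.0, t = 0.0;
    calibrate(850, 2400, c, e, t);
    std::printf("E = %.1f MeV, t = %.1f ns\n", e, t);
    return 0;
}

The real chain applies such constants channel by channel from the calibration database before the 4-momentum determination and data reduction steps.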

Offline LNF
Offline analysis: relatively high volume of data, ~1 TB (L = 10³⁰ cm⁻² s⁻¹).
The number of staff scientists/users involved in the offline analysis is of the order of 20.
System administration/librarian responsibilities belong to a single scientist/technologist who does not devote 100% of his/her time to this task.
Running an efficient offline computing facility requires:
– medium-term investment planning
– decoupling of the different issues, such as:
  – network (optimize infrastructure)
  – storage: disk and tape (manageability, reliability)
  – computing power
  – terminal access (security, ease of maintenance)

Network Infrastructure
First issue: overcome the limitations in local network bandwidth.
Until a few years ago, the bandwidth available for offline analysis was limited to 10 Mbit/s.
The building switch was upgraded to deliver 100 Mbit/s to all user workstations and gigabit speeds to the analysis machines:
– CISCO Catalyst 6000: 12 full-duplex Gigabit ports
This upgrade allowed us to deploy an efficient facility for "medium scale" offline computing.
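A back-of-the-envelope estimate (not a figure from the talk) shows what the upgrade means in practice for moving one of the ~1 TB analysis data sets mentioned later, ignoring protocol overhead:

t = (8 × 10¹² bit) / bandwidth:
– at 10 Mbit/s: ≈ 8 × 10⁵ s ≈ 9 days
– at 100 Mbit/s: ≈ 8 × 10⁴ s ≈ 22 hours
– at 1 Gbit/s: ≈ 8 × 10³ s ≈ 2.2 hours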

Storage
NAS (Network Attached Storage):
– Reliable: implements a hardware RAID disk array with hot spares, directory snapshots and file-system recovery capability.
– Fast: the OS kernel only handles disk, memory and network I/O.
– Versatile: exports can be accessed via NFS or other common protocols (e.g. NetBEUI/AppleTalk).
A Procom NetFORCE 1750 has been operating in the nuclear physics building for one year:
– 1.5 TB with SCSI RAID 5 support (2.2 TB by the end of the year; see the capacity note below)
– single PIII CPU, 1 GB RAM
– custom OS based on FreeBSD
– multiple 100/1000 Mbit/s network interfaces
– 1 GB LTO autoloader
– remote (HTTP/telnet) management
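For reference, the usable capacity of a RAID 5 set (one disk's worth of parity spread across the array) is usable ≈ (N − 1) × S for N disks of size S; the disk count and size below are assumed purely for illustration, not taken from the talk. For example, twelve 146 GB SCSI disks would give ≈ 11 × 146 GB ≈ 1.6 TB of usable space, with any hot spares sitting outside the array.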

Linux Cluster
AIACE/HERMES:
– Linux (RHL/SuSE) based offline software distributions
– data analysis and simulation
Core Linux analysis machines:
– 6 dual-CPU 1U 2.4 GHz PIV Xeon rack-mounted nodes (12 CPUs)
– 1 GB DDR ECC RAM
– dual Gigabit NIC
– hot-removable 2 × 32 GB SCSI local disks (system installation and file cache)
– clustering software: MOSIX and/or DQS (installation underway)
– mission-critical data and programs reside on the NAS

Server Based Computing
User access to clustered computing resources:
– past years: X terminals
– nowadays: Windows PCs (large administrative burden and a large fraction of task replication)
A possible solution: Server Based Computing
– 2 dual-CPU 1U 2.4 GHz PIV Xeon rack-mounted nodes (4 CPUs), 1 GB RAM (analogous to the Linux analysis nodes)
– Windows 2000 Advanced Server
– Terminal Services (Remote Desktop display)
– transparent file sharing via the NAS

Thin Clients
Linux-based thin clients: display X, Windows, and local Web applications.
– native X support
– flash-resident ssh, RDP, ICA and telnet applications
– only the server requires a full application installation
– thin clients are diskless and can be managed remotely
– RDP clients can easily be installed on pre-existing PCs in order to convert them into thin clients

Putting It All Together
[Network diagram: CAT.6000 switch connecting the Procom 1750 NAS, the Linux master node and farm nodes, the Win2K Advanced Server, PCs, thin clients and Wi-Fi applications]

Outlook and Conclusions
The new generation of nuclear physics experiments at intermediate energies:
– increase in detector complexity and statistical accuracy
– increase in the number of final users working on the analysis
Demand for:
– more structured software development
– medium-sized computing facilities
– clustered computing (Linux)
– distributed applications (thin clients)
A new approach and mentality needs to be developed by the larger groups operating in nuclear physics.