ESRF: IT Challenges
Systems and Communications
CERN Openlab – 10/11 December 2013

ESRF Beamlines
X-ray standing waves, anomalous scattering, surface diffraction, gamma spectroscopy, ultra-fast scattering / high pressure, materials science, circular polarisation, microbeam, protein crystallography, inelastic scattering, medical, nuclear spectroscopy, high energy, absorption spectroscopy, powder diffraction, spectroscopy on ultra-diluted samples.

What is a Beamline?

IT general architecture
Central storage, Infrastructure servers, Network, Tape Backup, Compute Clusters, Detectors.

How much do we store each day?

Challenges
- ESRF Phase II (higher photon flux) starts in 2015.
- Already, two Eiger detectors arrive at the ESRF in Fall 2014 (figures checked in the sketch below):
  - 4 Mpixel x 3 bytes = 12 MB/image
  - 750 Hz => 750 x 12 MB = 9 GB/s
  - local compression => 1 GB/s minimum
  - 10% duty cycle => ~9 TB/day
- Limiting factors:
  - human resources – data sorting and exploitation
  - data analysis – online
  - data analysis – offline
  - speed of the central storage
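A quick sanity check of these figures in Python (a minimal sketch; only the numbers quoted on the slide plus 86 400 seconds per day go in):

    # Back-of-the-envelope check of the Eiger figures above.
    MB, GB, TB = 1e6, 1e9, 1e12

    image_size = 4e6 * 3                    # 4 Mpixel x 3 bytes = 12 MB/image
    raw_rate   = image_size * 750           # 750 Hz            =  9 GB/s
    disk_rate  = 1 * GB                     # >= 1 GB/s after local compression
    per_day    = disk_rate * 0.10 * 86_400  # 10% duty cycle   ~=  9 TB/day

    print(f"image size: {image_size / MB:.0f} MB")   # 12 MB
    print(f"raw rate  : {raw_rate / GB:.0f} GB/s")   # 9 GB/s
    print(f"per day   : {per_day / TB:.1f} TB")      # 8.6 TB, i.e. ~9 TB/day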

Ongoing work
- Local Buffer Storage (LBS): only for Linux-based detectors.
- Dedicated Storage Server (DSS): time consuming, but … MB/s per unit.
- High-Speed Network (HSN): 40 Gbps and 10 Gbps links (byte-rate conversion sketched below).
- Diagram components: Detectors, Local Buffer Storage (LBS), Dedicated Storage Server (DSS), Network, Central storage, Compute Clusters, Tape Backup, Infrastructure servers.
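The link speeds are easier to compare with the storage rates when expressed in bytes per second; a rough conversion, ignoring protocol overhead (my simplification, not the slide's):

    # Nominal link speeds in Gbit/s converted to GB/s (no protocol overhead).
    def gbps_to_GBps(gbps: float) -> float:
        return gbps / 8

    for link in (10, 40):
        print(f"{link} Gbps ~= {gbps_to_GBps(link):.2f} GB/s")
    # 10 Gbps ~= 1.25 GB/s: barely covers the 1 GB/s post-compression minimum.
    # 40 Gbps ~= 5.00 GB/s: headroom for more than one detector on one path.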

Requirements for one Beamline (per the diagram)
- Buffer storage
- Online data analysis (CPU+GPU)
- Long-term storage
- Offline data analysis (CPU+GPU)
- Export, backup, archive, etc.

Overall requirements
- Typically 6 – 10 Beamlines running simultaneously
- Typically 6 high-speed detectors per Beamline
- Typically 2 detectors operating simultaneously per Beamline
- Peak bandwidth per Beamline = 1 – 10 GB/s (aggregate estimate sketched below)
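Multiplying these figures out gives the order of magnitude the shared systems have to absorb; the assumption that the per-Beamline peaks can coincide is mine, not the slide's:

    # Illustrative aggregate sizing from the figures above.
    beamlines = (6, 10)          # Beamlines running simultaneously
    peak_GBps = (1, 10)          # peak bandwidth per Beamline

    low  = beamlines[0] * peak_GBps[0]    #   6 GB/s
    high = beamlines[1] * peak_GBps[1]    # 100 GB/s
    print(f"aggregate peak: {low}-{high} GB/s into the shared infrastructure")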

Overall architecture, with one Beamline (diagram summary)
- Detector PCs (Windows or Linux) write into Buffer Storage at >1 GB/s; Buffer Storage boxes may be either dedicated to a detector or shared by several detectors.
- Realtime online data analysis (CPU+GPU) and visualisation.
- Long-term storage + offline data analysis (CPU+GPU), backup, archive, user export.
- Green = dedicated to a Beamline; Blue = shared between Beamlines.
(An illustrative sketch of the buffer-to-long-term-storage step follows.)
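The presentation does not say how data move from the Buffer Storage to the shared long-term storage; purely as an illustration of the buffer's role, a minimal draining loop could look like the sketch below. The paths, the polling interval and the use of rsync are my assumptions, not the ESRF implementation.

    # Hypothetical Buffer Storage -> Long Term Storage mover (illustration only).
    import subprocess
    import time
    from pathlib import Path

    BUFFER = Path("/buffer/detector1")      # fast per-detector buffer (example path)
    LTS    = "storage01:/data/beamline1/"   # shared long-term storage (example name)

    def drain_buffer() -> None:
        """Copy completed acquisition files to long-term storage, then free the buffer."""
        for f in sorted(BUFFER.glob("*.h5")):
            subprocess.run(
                ["rsync", "-a", "--remove-source-files", str(f), LTS],
                check=True,
            )

    if __name__ == "__main__":
        while True:
            drain_buffer()
            time.sleep(10)   # polling interval, arbitrary choice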

Other challenges
- Data confidentiality

Questions?