Computing Resources for DAQ & Online
Amedeo Perazzo, Photon Controls and Data Systems
Online Computing, November 12th, 2008


Computing Resources for DAQ & Online
Amedeo Perazzo, Photon Controls and Data Systems Online Manager
SLAC, November 12th 2008
LCLS FAC - Controls Breakout Session

Contents
- Network organization
- Services: machines which provide services to user workstations, DAQ & control nodes
- Users: development, Internet access
- DAQ: machines used for moving data from the FEE to the online data cache
- Controls: machines used for controlling all slow devices
- Data Cache: temporary storage for online data analysis and as a buffer between DAQ and offline

Network Zones
The Photon Controls & Data Systems (PCDS) network is organized in two zones:
- Back-end: provides networking services to the PCDS enclave
- Front-end: control and data acquisition traffic
The network organization is driven by new DOE security rules.

Back End Zone: provides the infrastructure for all service traffic. Divided into six subnets:
- dmz: limited access from SLAC machines
- usr: user development and Internet access nodes
- service: NFS, DNS, NTP and AAA server nodes
- mgmt: utility nodes (terminal servers, shelf managers, etc.)
- cds: service subnet for Control & DAQ nodes
- dss: service subnet for the Data Storage machines
Must allow Control & DAQ to remain operational, for a limited amount of time, when the connection to the SLAC domain is down.
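The six back-end subnets and their roles can be captured as a small lookup table; the sketch below is purely illustrative bookkeeping (the subnet names and roles come from the slide, the code itself is not part of the PCDS tooling):

```python
# Role table for the six Back End Zone subnets listed above.
# Purely illustrative; names and roles come from the slide.

BACK_END_SUBNETS = {
    "dmz":     "limited access from SLAC machines",
    "usr":     "user development and Internet access nodes",
    "service": "NFS, DNS, NTP and AAA server nodes",
    "mgmt":    "utility nodes (terminal servers, shelf managers)",
    "cds":     "service subnet for Control & DAQ nodes",
    "dss":     "service subnet for the Data Storage machines",
}

def role_of(subnet):
    """Look up the role of a back-end subnet, e.g. role_of('cds')."""
    return BACK_END_SUBNETS[subnet]
```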

Back End Zone Diagram
[Diagram: the SLAC Domain and Accelerator Domain connect to the Back End Zone subnets (DMZ, User, Service, MGMT, CDS, DSS); service traffic (NFS, DNS, NTP, AAA) reaches the Control & DAQ nodes via the CDS subnet, science bulk data flows to the data cache machines via the DSS subnet, and the Front End Zone sits behind the Back End Zone.]

Front End Zone
Front End Zone: provides the infrastructure for the control traffic and the data acquisition traffic. Divided into three networks:
- daq: science data, partition management, run monitoring and telemetry traffic; DAQ operator consoles (L0), readout nodes (L1), processing nodes (L2) and data cache machines (L3)
- epics: control traffic; control operator consoles (E0), IOCs (E1), EPICS archiver (E3) and channel access gateway
- bld: low-latency 120 Hz beam-line data traffic
The DAQ network is further subdivided into experiment-specific subnets: daq_amo, daq_xpp, daq_cxi, etc. The EPICS network is further subdivided into epics_amo, epics_xpp, epics_mcc, epics_xtod, etc. Selected EPICS traffic may be exchanged between the different subnets via the channel access gateway.
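The per-experiment subnets follow a simple <network>_<experiment> naming convention; a tiny sketch that generates the names listed above (the experiment list is taken from the slide and may not be the full set):

```python
# Generate per-experiment subnet names from the <network>_<experiment>
# convention described above. Experiment lists are from the slide.

def subnet_names(network, experiments):
    return [f"{network}_{exp}" for exp in experiments]

daq_subnets = subnet_names("daq", ["amo", "xpp", "cxi"])
epics_subnets = subnet_names("epics", ["amo", "xpp", "mcc", "xtod"])
```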

Network Devices for Services
CISCO Catalyst switch:
- Gbps switch fabric backplane
- 9 slots (currently: 1 supervisor, 1 x 48 1Gb RJ45, 1 x 24 SFPs)

Controls
- CISCO switch: Gb RJ45 ports plus SFPs
- Motorola MVME6100: MPC7457 PPC, 1.2 GHz
- DELL R200: 1U server, 2 cores, 1 PCIe, 1 PCI-X
- ELMA VME chassis: redundant PWS, shelf manager; 4, 8 and 21 slots

Network Devices for DAQ
Cluster Interconnect Module (CIM):
- SLAC custom-made ATCA switch
- Based on two 24-port 10Gb Ethernet switch ASICs from Fulcrum
- Up to 480 Gb/s total bandwidth
- Fully managed layer-2, cut-through switch
ATCA chassis:
- 5 slots, shelf manager
- 14 slots, shelf manager
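A managed layer-2 switch such as the CIM forwards frames by learning which port each source MAC address lives on, flooding only when the destination is unknown. A minimal sketch of that learning/forwarding logic (purely illustrative; this is not the CIM or Fulcrum firmware):

```python
# Minimal sketch of layer-2 MAC learning and forwarding, the basic
# behavior of a managed layer-2 switch. Illustrative only.

class L2Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        """Learn the source, then forward: unicast if the destination
        is known, otherwise flood to all other ports."""
        self.mac_table[src_mac] = in_port  # learn / refresh
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # known unicast
        # Unknown destination: flood everywhere except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = L2Switch(num_ports=24)
flooded = sw.handle_frame("aa:aa", "bb:bb", in_port=3)   # bb:bb unknown -> flood
sw.handle_frame("bb:bb", "aa:aa", in_port=7)             # learns bb:bb on port 7
unicast = sw.handle_frame("aa:aa", "bb:bb", in_port=3)   # now a known unicast
```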

DAQ Nodes
- SLAC ATCA RCE boards: 2 cores, 8 PGP lanes, 2x10Gb/s Ethernet fabric interface, 2x1Gb/s Ethernet base interface
- ATCA blades: 8 cores, 16GB, dual 10Gb/s Ethernet fabric interface, dual 1Gb/s Ethernet base interface, 2 AMC slots
- cPCI Concurrent PP512: 4 cores, 4GB, 3x1Gb/s Ethernet interfaces, 2 PMC/XMC slots

Servers
Services (NFS, DNS, LDAP, NTP, AAA, logging, etc.):
- Supermicro 2U (8 cores, 32GB, 1TB SATA, redundant PWS)
- Hot spare with automatic failover
EPICS (CAG, EPICS DB, EPICS archiver, etc.):
- Supermicro 2U (8 cores, 32GB, 1TB SATA, 8 NICs, redundant PWS)
- Hot spare with automatic failover
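One common way to realize a hot spare with automatic failover is a heartbeat monitor that promotes the spare when the primary stops reporting in. The sketch below is a hypothetical illustration of that pattern, not the actual PCDS/SCCS tooling; the timeout value is an assumption:

```python
import time

# Sketch of hot-spare failover via heartbeat timeout.
# Hypothetical illustration; not the actual PCDS tooling.

HEARTBEAT_TIMEOUT = 5.0  # assumed: seconds of silence before failover

class FailoverPair:
    def __init__(self):
        self.active = "primary"
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called whenever the primary server reports in."""
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Promote the hot spare if the primary has gone silent."""
        now = time.monotonic() if now is None else now
        if self.active == "primary" and now - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = "spare"  # automatic failover
        return self.active

pair = FailoverPair()
pair.heartbeat()
ok = pair.check(now=pair.last_heartbeat + 1.0)            # primary still healthy
failed_over = pair.check(now=pair.last_heartbeat + 10.0)  # heartbeat missed
```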

Data Cache
RAID Server: Supermicro 4U (8 cores, 32GB, 1TB SATA, 24TB SAS through a hardware RAID SAS controller, redundant PWS)
- Two servers are envisioned in the beginning
- When one server is writing, the other is reading
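The two-server scheme is a ping-pong buffer: the DAQ writes to one cache server while offline analysis drains the other, and the roles are swapped (for example between runs). A minimal sketch of the role toggle, with hypothetical server names:

```python
# Sketch of the two-server ping-pong data cache: at any time one
# server accepts writes from the DAQ while the other is read by
# offline analysis; swap() exchanges the roles, e.g. between runs.
# Illustrative only; the server names are hypothetical.

class PingPongCache:
    def __init__(self, server_a, server_b):
        self.writer, self.reader = server_a, server_b

    def swap(self):
        self.writer, self.reader = self.reader, self.writer

cache = PingPongCache("cache01", "cache02")
first_writer = cache.writer      # "cache01" takes DAQ data
cache.swap()                     # run finished: drain cache01, fill cache02
second_writer = cache.writer
```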

Users
- Consoles: DELL OptiPlex with dual 20” monitors
- Development: DELL Precision with 20” monitor

Public Networks
Not part of the PCDS enclave.
Desktops:
- Part of the SLAC network
- Two CISCO 3750 switches in the NEH telecom room
- Dedicated redundant 1Gb/s link between the telecom room and SCCS
- Network devices maintained by SCCS
Wireless:
- Part of the SLAC visitor network
- 3 access points on the basement floor, 3 in the sub-basement, 2 in the FEE
- Connected to a switch in the NEH telecom room
- Maintained by SCCS

Status
- Built a prototype for the NEH online computing in the bldg 84 lab, using the same routing and firewall rules as NEH
- Created/tested the set of tools which will be used in NEH for:
  - Maintaining the service machines (NFS, NTP, DNS, AAA servers, etc.): synchronization with SCCS, backup, patching, logging
  - Maintaining the utility nodes (shelf managers, terminal servers, etc.)
  - Managing connections inside and outside the PCDS enclave
- The test-stand setup allows small-scale testing of the feasibility of the network plan, and will speed up commissioning in NEH
- Manpower (FTE) for the online computing effort: shared among 2 PCDS people and 2 SCCS people; a similar effort is expected for 2009
- Expenses for 2008 are along expectations; enough resources are allocated for 2009
- Big ticket item: the L2 event building and processing farm

Computational Alignment
[Figure: scattering geometry with k_in, k_out and q; experimental data (ALS): difference of pyramid diffraction patterns 10° apart, Gösta Huldt, U. Uppsala]
“The number [of floating-point operations] currently used to obtain high-resolution structures of specimens prepared as 2D crystals is estimated to require at least … floating-point operations.” R. M. Glaeser, J. Struct. Bio. 128 (1999)
Computational alignment requires large computational power that might only be provided by performing offline analysis. Save first, and analyze later? To be investigated.

Real-time Processing – Sorting in CXI
- Diffraction from a single molecule: a single LCLS pulse yields a noisy diffraction pattern of unknown orientation
- Combine 10^5 to 10^7 measurements into a 3D dataset: classify/sort, average, align
- Reconstruct by oversampling phase retrieval (Miao, Hodgson, Sayre, PNAS 98 (2001))
- The highest achievable resolution is limited by the ability to group patterns of similar orientation (Gösta Huldt, Abraham Szöke, Janos Hajdu, J. Struct. Biol., ERD-047)
- Real-time?
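The classify/sort step amounts to grouping diffraction patterns whose intensity distributions look alike. A toy sketch of similarity-based grouping using cosine similarity on flattened patterns; this only illustrates the idea, and real classification pipelines for noisy single-molecule data are far more sophisticated:

```python
import math

# Toy sketch of grouping diffraction patterns by similarity: patterns
# whose cosine similarity to a class representative exceeds a threshold
# join that class, otherwise they start a new class. Illustration only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(patterns, threshold=0.95):
    """Greedy single-pass grouping: assign each pattern to the first
    class whose representative it matches, else open a new class."""
    classes = []  # list of (representative, members)
    for p in patterns:
        for rep, members in classes:
            if cosine(p, rep) >= threshold:
                members.append(p)
                break
        else:
            classes.append((p, [p]))
    return classes

# Two nearly parallel patterns and one very different one.
patterns = [[1.0, 0.0, 0.1], [0.9, 0.05, 0.1], [0.0, 1.0, 0.1]]
groups = classify(patterns)
```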