Development of the System remote access real time (SRART) at JINR for monitoring and quality assessment of data from the ATLAS LHC experiment (Concept and architecture of prototype SRART at JINR)

Development of the System remote access real time (SRART) at JINR for monitoring and quality assessment of data from the ATLAS LHC experiment (Concept and architecture of prototype SRART at JINR) Kotov V.M., Rusakovich N.A.

Introduction In today's large international high-energy physics collaborations it has become essential that collaborators can participate in the operation of experimental facilities from remote locations. To address the needs for remote operations and remote participation in the ATLAS experiment, this report proposes the construction of the System remote access real time (SRART) at JINR, Dubna. The purpose of the SRART is to help JINR scientists and engineers working on ATLAS contribute their expertise to commissioning and operations activities at CERN.

The SRART at JINR has two primary functions: it provides a physical location with access to the information that is available in the control rooms and operations centers inside Point 1 at CERN, and it serves as a communications conduit between CERN and the members of the ATLAS community located in Russia, allowing experts from the participating institutes to be involved in the quality assessment of the data during their processing. In accordance with the ATLAS object model, distributed data quality assessment should be included at all processing levels, from on-line data taking to off-line processing at Tiers 0-2.

ATLAS Remote Operations For the start of ATLAS operations, all control of the ATLAS detector, trigger and data acquisition systems, as well as real-time online monitoring of data quality, will be the responsibility of a shift crew working in the ATLAS Control Room (ACR). The remote operations centers, such as the SRART at JINR, are expected to have the same kinds of monitoring as ATLAS Point 1. The ATLAS operations model is expected to evolve, and in the future more and more of the responsibilities may be transferred from the control room to the remote centers. Many monitoring applications are being developed by ATLAS collaborators for use in the control room and at remote sites.

Concept for the SRART at JINR The concept, architecture, and service composition of the system under development comply with the architecture of the ATLAS TDAQ system at CERN, and the system is a fragment of the overall ATLAS information infrastructure.

Development of the prototype SRART at JINR is based on the experience gained by JINR programmers, physicists, and engineers in the design of the ATLAS data acquisition and processing system (TDAQ) at CERN, which allows a wide range of TDAQ services to be used for the development of the SRART at JINR and for its integration into the general distributed system for handling ATLAS LHC data. Modern high-energy physics investigations require handling large amounts of information, beginning with on-line data collection and processing for input compression and continuing with off-line analysis using GRID technology.

Management of the multilevel, geographically distributed computation systems which underlie the GRID information infrastructure of high-energy physics experiments requires a distributed control system with well-developed facilities for monitoring and assessing the quality of the data from experiments at CERN's LHC. Being geographically distributed, the processing system should allow remote access to the status of the detectors used in an experiment and remote data quality monitoring for all institutes participating in the experiment. The SRART has always been planned for remote data quality monitoring of ATLAS during operations; it is also intended to provide support for training people before they go to CERN and for remote participation in ATLAS studies. We see an opportunity for JINR scientists and engineers to work together with detector experts and contribute their combined expertise to ATLAS commissioning.

Features With the construction at JINR coming into complete operation, the following features have been realized:
CERN-style consoles with 6 workstations
Videoconferencing installed for one console, which can be expanded to two
Webcams for remote viewing of JINR
Secure network for console PCs
HD viewing of JINR
Screen Snapshot Service (SSS), similar to the one at Fermilab [2]
Work to develop the necessary tools is in progress.

SSS is an approach to provide a snapshot of a graphical interface to remote users. A "snapshot" is defined as an image copy of a graphical user interface at a particular instant in time, such as a DAQ system buffer display or an operator control program. It is a view-only image, so there is no danger of accidental user input. SSS was initially implemented for desktops, but it could be targeted at individual application GUIs.
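
As a rough illustration of the SSS idea (not the actual Fermilab implementation [2]), the Python sketch below periodically grabs the local desktop and publishes the latest frame as a view-only JPEG over HTTP; the use of Pillow's ImageGrab, the 5-second period and port 8080 are assumptions made for the example.

import io
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

from PIL import ImageGrab   # assumption: Pillow is available and the process runs inside the X session

_latest = {"jpeg": b""}     # most recent encoded snapshot
_lock = threading.Lock()

def capture_loop(period_s=5.0):
    # Periodically grab the desktop and keep only a view-only JPEG copy of it.
    while True:
        img = ImageGrab.grab()                           # image copy of the GUI at this instant
        buf = io.BytesIO()
        img.convert("RGB").save(buf, format="JPEG", quality=70)
        with _lock:
            _latest["jpeg"] = buf.getvalue()
        time.sleep(period_s)

class SnapshotHandler(BaseHTTPRequestHandler):
    # Remote users can only GET the image: view-only, so no input ever reaches the console.
    def do_GET(self):
        with _lock:
            data = _latest["jpeg"]
        self.send_response(200 if data else 503)
        self.send_header("Content-Type", "image/jpeg")
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    threading.Thread(target=capture_loop, daemon=True).start()
    HTTPServer(("0.0.0.0", 8080), SnapshotHandler).serve_forever()

In a real deployment the snapshot would be taken on the console machines described below and would only be reachable from the secure console network.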

Console machine Hardware: Linux box (specs equivalent to those of the ACR machines)
Dual- or quad-core CPU, 2.4 GHz
4 GB RAM and 2 x 500 GB HD
Gigabit Ethernet card
Displays: 1 x 30” (2560x1600) plus 19” (1280x1024) screens
Software: Scientific Linux CERN 4
NX client
Athena
ROOT
Any monitoring software

Acknowledgements Successful completion of this work would not have been possible without the dedicated efforts of the following: from JINR: Kazakov A., Mineev M., Aleksandrov I., Aleksandrov E., Yakovlev A., Korenkov V., Atanasov S.; from the ATLAS TDAQ group at CERN: Mapelli L., Kolos S., Lehmann-Miotto G., Soloviev I., Caprini M.

References:
1. Monitoring, S. Kolos, TDAQ Workshop, Rome, Italy, May.
2. Remote Operation for LHC and CMS, Eric Gottschalk, Fermilab, RT07, May 2007.

Backup slides

Remote access technology Appropriate software for remote access still has to be chosen:
– A plain X11 session is very slow over a long distance
– An evaluation has been organized in the Monitoring Working Group to choose the appropriate technology for providing a viable desktop environment for remote users:
Free NX (GPL)
NoMachine commercial NX license
Sun Secure Global Desktop
– An evaluation report has been published
Conclusion:
– Free NX has enough functionality to be used in the current phase
– A final decision will be taken, based on experience from exploiting the system, by the end of the summer

NX technology A layer above the standard X11 client/server protocol; NX is a standard which has several implementations available on the market
Compresses X11 data
Does local caching of X11 information (a toy sketch of the compression and caching idea follows after this list)
For most applications it is almost as responsive as a local X11 session
NX clients are free for non-commercial use
NX servers:
– NoMachine sells a commercial version
– A GPL version, called Free NX, also exists
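
The Python sketch below is a toy illustration only, not the real NX protocol: it combines the two techniques listed above by sending the first occurrence of a message compressed and replacing identical repeats with a short cache reference (the message contents and the REF/NEW framing are invented for the example).

import hashlib
import zlib

class CachingCompressor:
    # Encode messages: the first occurrence is sent compressed, identical repeats
    # are replaced by a short cache reference (the receiving side keeps the same cache).
    def __init__(self):
        self._seen = set()

    def encode(self, payload: bytes) -> bytes:
        digest = hashlib.sha1(payload).digest()
        if digest in self._seen:
            return b"REF:" + digest                  # cache hit: ~24 bytes on the wire
        self._seen.add(digest)
        return b"NEW:" + zlib.compress(payload)      # cache miss: compressed payload

enc = CachingCompressor()
msg = b"<some repetitive X11 drawing request>" * 100
print(len(enc.encode(msg)))   # first time: size of the compressed payload
print(len(enc.encode(msg)))   # second time: only the short reference is sent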

Access policy A limited number of sessions is supported now:
– Initially it is planned to support 16 concurrent sessions, i.e. one session per CPU core
Now anybody with a valid AFS account can log in to the system
Possible approaches (a sketch of the last one is given below):
– Use per-system accounts
– Use per-institute accounts
– Limit the number of sessions per account
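
A minimal Python sketch of the last approach, limiting the number of sessions per account, assuming the list of active sessions can be obtained from the NX server; the per-account cap of 2 and the helper name may_open_session are hypothetical, only the total of 16 concurrent sessions comes from the slide.

from collections import Counter

MAX_TOTAL = 16        # one concurrent session per CPU core, as planned initially
MAX_PER_ACCOUNT = 2   # hypothetical per-account cap, not taken from the slides

def may_open_session(user, active_sessions):
    # active_sessions: list of account names, one entry per running desktop session
    # (e.g. parsed from the NX server's session listing).
    if len(active_sessions) >= MAX_TOTAL:
        return False                                 # all CPU cores are busy
    return Counter(active_sessions)[user] < MAX_PER_ACCOUNT

# Example: 15 sessions running, "alice" already owns 2 of them.
sessions = ["alice", "alice"] + ["bob"] * 13
print(may_open_session("alice", sessions))   # False: per-account cap reached
print(may_open_session("carol", sessions))   # True: capacity remains

Per-system or per-institute accounts would only change what is counted as the user in this check.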