Computing at MAGIC: present and future
Javier Rico, Institució Catalana de Recerca i Estudis Avançats & Institut de Física d'Altes Energies, Barcelona, Spain
ASPERA Computing and Astroparticle Physics Meeting, Paris, 8 February 2008

Summary
- Introduction: VHE γ-ray astronomy and MAGIC
- Data handling at MAGIC
- GRID at MAGIC
- Virtual observatory
- Conclusions

VHE astronomy
- MAGIC is a Cherenkov telescope (system) devoted to the study of the most energetic electromagnetic radiation, i.e. very high energy (VHE, E > 100 GeV) γ-rays
- VHE γ-rays are produced in non-thermal, violent processes in the most extreme environments of our Universe
- Science topics: SNRs, QSRs, dark matter, pulsars, GRBs, quantum gravity, cosmology, AGNs, origin of cosmic rays
- Astrophysics of the final stellar stages, AGNs and GRBs; fundamental physics

MAGIC
- MAGIC is currently the largest-dish Cherenkov telescope in operation (17 m diameter)
- Located at the Observatorio del Roque de los Muchachos on the Canary Island of La Palma (Spain)
- Run by an international collaboration of ~150 physicists from Germany, Spain, Italy, Switzerland, Poland, Armenia, Finland and Bulgaria
- In operation since fall 2004 (about to finish the 3rd observation cycle)
- A 2nd telescope (MAGIC-II) is to be inaugurated on September 21st, 2008

Scientific highlights
- MAGIC has discovered 10 new VHE γ-ray sources (7 extragalactic + 3 galactic)
- New source populations unveiled (radio quasar, microquasar)
- Farthest object ever observed at these energies (z = 0.54)
- GRBs observed (though not detected) during the prompt emission
- Big flares used to test Lorentz invariance
- So far ~30 papers published, with many more in the pipeline

Imaging
A segmented PMT camera (577/1039 channels for the first/second telescope) images the Cherenkov showers.

Raw data volume
The raw data volume rate is the product of the event rate R, the number of camera pixels n, the number of digitization samples s and the precision p (bits per sample); see the sketch after the table:

  data volume rate = R × n × s × p

Phase                                                             | 1 hour | 1 day   | 1 year
1: One telescope, 300 MHz digitization system (Oct 2004-Dec 2006) |        | 150 GB  | 20 TB
2: One telescope, 2 GHz digitization system (Jan 2007-Sep 2008)   |        | 500 GB  | 75 TB
3: Two telescopes, 2 GHz digitization system (from Sep 2008)      | 175 GB | 1400 GB | 210 TB
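As an illustration of the scaling above, here is a minimal sketch (Python) that evaluates the data volume rate formula. The pixel count reuses the 577 channels quoted for the first camera; the event rate, sample count, bit precision and observation hours are made-up placeholders, not the actual MAGIC parameters.

```python
# Sketch: evaluate data volume rate = R * n * s * p for one telescope.
# All parameter values below are illustrative placeholders.

def data_volume_per_second(rate_hz: float, n_pixels: int,
                           n_samples: int, precision_bits: int) -> float:
    """Raw data volume in bytes per second: R * n * s * p, converted to bytes."""
    bits_per_second = rate_hz * n_pixels * n_samples * precision_bits
    return bits_per_second / 8.0

def nightly_volume_gb(rate_hz: float, n_pixels: int, n_samples: int,
                      precision_bits: int, observation_hours: float = 8.0) -> float:
    """Volume per observation night in GB, for an assumed number of hours."""
    seconds = observation_hours * 3600.0
    return data_volume_per_second(rate_hz, n_pixels, n_samples,
                                  precision_bits) * seconds / 1e9

if __name__ == "__main__":
    # Hypothetical single-telescope configuration (placeholder numbers)
    gb = nightly_volume_gb(rate_hz=250, n_pixels=577, n_samples=30, precision_bits=10)
    print(f"~{gb:.0f} GB per observation night")
```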

Data flow
From the data-flow diagram: the DAQ and a fast on-site analysis run at La Palma; data are calibrated and reduced, and transferred to PIC (Barcelona) by FTP (reduced data also by mail). The yearly volumes are roughly 200 TB of raw data, 20 TB of calibrated data and 2 TB of reduced data, which users access at PIC.

Starting in September 2008 the MAGIC data center is hosted at PIC, Barcelona (a Tier-1 center); it has already been in test phase for a year. It provides:
- automatic data transfer from La Palma
- tape storage of the raw data
- automatic data analysis
- access to the latest year of calibrated and reduced data
- CPU and disk buffer for data analysis
- a database

MAGIC/PIC data center
Data center disk needs (plus unlimited tape storage capacity):

Use                        | Size
1 yr reduced data          | 3 TB
1 yr calibrated data       | 21 TB
Buffer for data processing | 21 TB
Buffer for tape/disk I/O   | 21 TB
Users' buffer              | 6 TB
Total                      | 72 TB

The already running system consists of:
- 25 TB of disk space (ramp-up to the final 72 TB foreseen for next September, within schedule)
- LTO3/LTO4 tape storage and I/O with robots
- ~20 CPUs (2 GHz) for data processing and analysis
- automation of data transfer/processing/analysis
- a database
- WEB access

Trends foreseen for 2008
Philosophy: adopt Grid to allow MAGIC users to do better science; "if it ain't broken, don't fix it".
- Leverage the worldwide mutual-trust agreement for Grid certificates to simplify user ID management for:
  - interactive login (ssh → gsi-ssh or equivalent)
  - casual file transfer (https via Apache+mod_gridsite, or gridftp; see the sketch below)
- Move to batch submission via Grid tools in order to unify CPU accounting with LHC
- Set up the Grid "reliable File Transfer Service" to automate file distribution between PIC and the MAGIC sites that regularly subscribe to many datasets
- PIC/IFAE will have specific resources to help with this, partially thanks to funding from the EGEE-III project
- Integrate into the procedure for opening an account in the data center the additional steps for a user to get a Grid certificate and be included as a member of the MAGIC VO
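To illustrate the "casual file transfer" item, here is a minimal sketch that wraps the standard globus-url-copy GridFTP client from Python. The storage-element host name and file paths are hypothetical placeholders, and a valid Grid proxy certificate (e.g. created with voms-proxy-init for the MAGIC VO) is assumed to already exist.

```python
# Sketch: fetch a file from a (hypothetical) storage element over GridFTP
# by calling the standard globus-url-copy command-line client.
# Assumes a valid Grid proxy certificate is already in place.
import subprocess

def gridftp_fetch(remote_url: str, local_path: str) -> None:
    """Copy one remote file to local disk; raises CalledProcessError on failure."""
    subprocess.run(["globus-url-copy", remote_url, f"file://{local_path}"],
                   check=True)

if __name__ == "__main__":
    # Hypothetical source URL and destination path, for illustration only
    gridftp_fetch("gsiftp://se.example.org/magic/reduced/20080208_run001.root",
                  "/tmp/20080208_run001.root")
```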

Monte Carlo simulation
- The recorded data are mostly background events due to charged cosmic rays (CRs)
- Background rejection needs large samples of Monte Carlo simulated γ-ray and CR showers
- Very CPU consuming: 1 night of background corresponds to > 10^6 computer-days
- Open issues: access to the simulated samples, MC production coordination, scalability (MAGIC-II, CTA, ...)
- GRID can help with these issues

The idea
H. Kornmayer (Karlsruhe) proposed the following scheme:
- A MAGIC Virtual Organization created within EGEE-II
- It involves three national Grid centers: CNAF (Bologna), PIC (Barcelona) and GridKa (Karlsruhe)
- Connect MAGIC resources to enable collaboration
- Two subsystems: MC (Monte Carlo) and Analysis; start with MC first

MC workflow
Example request: "I need 1.5 million hadronic showers with energy E, direction (theta, phi), ... as background sample for the observation of the Crab Nebula."
Workflow steps (see the sketch after this list):
1. Run the MAGIC Monte Carlo Simulation (MMCS) and register the output data
2. Simulate the telescope geometry with the reflector program for all interesting MMCS files and register the output data
3. Simulate the starlight background for a given position in the sky and register the output data
4. Simulate the response of the MAGIC camera for all interesting reflector files and register the output data
5. Merge the shower simulation and the starlight simulation and produce a Monte Carlo data sample
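A minimal sketch of how the five steps could be chained into a single pipeline. The program names, command-line arguments and the register step are illustrative placeholders; the actual MMCS/reflector/camera invocations and the Grid file-catalogue calls are not specified in the slides.

```python
# Sketch of the MC workflow: each stage runs an external simulation program
# and "registers" its output. Program names and arguments are placeholders.
import subprocess

def run_and_register(cmd: list, output_file: str) -> str:
    """Run one simulation stage, then register its output (placeholder print)."""
    subprocess.run(cmd, check=True)
    print(f"registering {output_file} in the file catalogue")
    return output_file

def mc_workflow(request_id: str) -> str:
    showers = run_and_register(["mmcs", "--card", f"{request_id}.card"],
                               f"{request_id}_mmcs.dat")
    reflector = run_and_register(["reflector", showers],
                                 f"{request_id}_reflector.rfl")
    starlight = run_and_register(["starfield", "--ra", "83.63", "--dec", "22.01"],
                                 f"{request_id}_starlight.rfl")
    camera = run_and_register(["camera", reflector],
                              f"{request_id}_camera.root")
    return run_and_register(["merge", camera, starlight],
                            f"{request_id}_mc_sample.root")
```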

Implementation
Three main components:
- Meta database: bookkeeping of the requests, their jobs and the data
- Requestor: the user defines the parameters by inserting a request into the meta database
- Executor: creates Grid jobs by checking the meta database frequently (via cron) and generating the input files
A sketch of one executor pass follows.
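A minimal sketch of the Executor idea, assuming a plain SQLite table as a stand-in for the meta database and an invented write_input_card helper; the real system's database schema, middleware calls and file formats are not given in the slides.

```python
# Sketch: one executor pass that polls a (placeholder) metadata DB for pending
# requests and writes one input file per job. Intended to be run from cron.
import sqlite3
from pathlib import Path

DB_PATH = "magic_mc.db"        # hypothetical metadata database
JOB_DIR = Path("grid_jobs")    # hypothetical directory for generated inputs

def write_input_card(job_id: int, params: str) -> Path:
    """Write a simulation input card for one job (the format is a placeholder)."""
    JOB_DIR.mkdir(exist_ok=True)
    card = JOB_DIR / f"job_{job_id}.card"
    card.write_text(params + "\n")
    return card

def executor_pass() -> None:
    """Turn every pending request into a job input file and mark it submitted."""
    con = sqlite3.connect(DB_PATH)
    rows = con.execute(
        "SELECT job_id, params FROM jobs WHERE status = 'pending'").fetchall()
    for job_id, params in rows:
        write_input_card(job_id, params)
        con.execute("UPDATE jobs SET status = 'submitted' WHERE job_id = ?",
                    (job_id,))
    con.commit()
    con.close()

if __name__ == "__main__":
    executor_pass()
```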

Status of MC production
- The last data challenge (from September 2005) produced ~15000 simulated γ-ray showers, with ~4% failures
- After H. Kornmayer left, the project has been stalled
- A new crew is taking over the VO (UCM Madrid + INSA + Dortmund)
- The plan is to start producing MC for MAGIC-II soon

Virtual observatory
- MAGIC will share data with other experiments (GLAST, VERITAS, HESS... more?)
- There might be some time reserved for external observers (moving from experiment towards observatory)
- In general, MAGIC results should be more accessible to the astrophysics community
- MAGIC will release data at the PIC data center using GRID technology, in FITS format (see the sketch below)
- Step-by-step approach:
  - published data (skymaps, light curves, spectra, ...) → imminent
  - data shared with other experiments (GLAST) → soon
  - data for external observers → mid-term
- A standard format has to be defined (with other experiments and the future CTA)
- Eventually integrated within a Virtual Observatory (under investigation)
(Figure: Crab Nebula skymap, MAGIC, September 2006)
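As an illustration of releasing data products as FITS files, here is a minimal sketch that writes a made-up spectrum to a FITS binary table using the astropy library. The column names, units, values and file name are placeholders; they are not the MAGIC/VO standard format, which the slide says is still to be defined.

```python
# Sketch: write a (made-up) gamma-ray spectrum to a FITS binary table.
# Column names, units and values are placeholders, not the MAGIC data format.
import numpy as np
from astropy.io import fits

energy = np.array([100.0, 200.0, 400.0, 800.0], dtype=np.float32)  # GeV, placeholder
flux = np.array([1e-10, 4e-11, 1.5e-11, 5e-12], dtype=np.float32)  # placeholder values

cols = fits.ColDefs([
    fits.Column(name="ENERGY", format="E", unit="GeV", array=energy),
    fits.Column(name="FLUX", format="E", unit="TeV cm-2 s-1", array=flux),
])
hdu = fits.BinTableHDU.from_columns(cols)
hdu.name = "SPECTRUM"
hdu.writeto("example_spectrum.fits", overwrite=True)
```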

MAGIC-GRID architectural design proposal
- MAGIC Requestor (client): the user specifies the parameters of a particular job request through an interface; the request is sent as a VOTable inside a SOAP message. (SOAP: Simple Object Access Protocol; VOTable: XML standard for the interchange of data represented as a set of tables.)
- Meta data database: bookkeeping of the requests, their jobs and the data.
- MAGIC Executor (service): the server application creates Grid job template files using the middleware (gLite, LCG) and submits the jobs to the GRID for execution; job status information and results are returned to the user.
- GRID: the workflow (MMCS, reflector, camera) is executed on the available Grid nodes within the MAGIC Virtual Organization; the products are stored in a data product storage unit.
- Virtual Observatory accessibility: the MAGIC Executor gets notified when the jobs have finished; the application will be designed to send the output data to a persistent layer compliant with the emerging VOSpace protocol (to be implemented).
- VO tools: the MAGIC Requestor should allow interaction with VO applications, staying as close as possible to the new emerging astronomical applications.
A sketch of a Grid job template follows.
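A minimal sketch of the "Grid job template" step: building a gLite-style JDL job description from a request and writing it to a file. The executable name, sandbox contents and one-template-per-request layout are assumptions for illustration; the real templates used by the MAGIC Executor are not shown in the slides.

```python
# Sketch: generate a gLite/LCG JDL job description for one MC request.
# Executable and sandbox file names are hypothetical placeholders.
from pathlib import Path

JDL_TEMPLATE = """\
Executable          = "run_mc_workflow.sh";
Arguments           = "{card}";
StdOutput           = "std.out";
StdError            = "std.err";
InputSandbox        = {{"run_mc_workflow.sh", "{card}"}};
OutputSandbox       = {{"std.out", "std.err"}};
VirtualOrganisation = "magic";
"""

def write_jdl(request_id: int, out_dir: str = "grid_jobs") -> Path:
    """Write one JDL file for the given request and return its path."""
    card = f"job_{request_id}.card"   # input card prepared by the Executor
    jdl_path = Path(out_dir) / f"job_{request_id}.jdl"
    jdl_path.parent.mkdir(exist_ok=True)
    jdl_path.write_text(JDL_TEMPLATE.format(card=card))
    return jdl_path

if __name__ == "__main__":
    print(write_jdl(123))
```

The Executor would then hand such a file to the middleware's submission command (for example glite-wms-job-submit in gLite) to run it on the Grid.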

Summary
- The MAGIC scientific program requires large computing power and storage capacity
- The data center at PIC/IFAE (Barcelona) is up and will start official operation in September 2008, together with MAGIC-II
- Massive MC production for MAGIC-II will involve the GRID
- (Some) data will be released through a virtual observatory
- MAGIC is a good benchmark for other present and future astroparticle projects