Computing at MAGIC: present and future. Javier Rico, Institució Catalana de Recerca i Estudis Avançats & Institut de Física d'Altes Energies, Barcelona, Spain.

Presentation transcript:

1 Computing at MAGIC: present and future
Javier Rico, Institució Catalana de Recerca i Estudis Avançats & Institut de Física d'Altes Energies, Barcelona, Spain
ASPERA Computing and Astroparticle Physics Meeting, Paris, 8 February 2008

2 Summary
- Introduction: VHE γ-ray astronomy and MAGIC
- Data handling at MAGIC
- GRID at MAGIC
- Virtual observatory
- Conclusions

3 VHE astronomy
[Slide graphic with science topics: SNRs, QSRs, dark matter, pulsars, GRBs, quantum gravity, cosmology, AGNs, origin of cosmic rays]
MAGIC is a Cherenkov telescope (system) devoted to the study of the most energetic electromagnetic radiation, i.e. very high energy (VHE, E > 100 GeV) γ-rays.
VHE γ-rays are produced in violent non-thermal processes in the most extreme environments in our Universe.
Science topics: astrophysics of the latest stellar stages, AGNs and GRBs; fundamental physics.

4 MAGIC
- MAGIC is currently the largest-dish Cherenkov telescope in operation (17 m diameter)
- Located at the Observatorio del Roque de los Muchachos on the Canary Island of La Palma (Spain)
- Run by an international collaboration of ~150 physicists from Germany, Spain, Italy, Switzerland, Poland, Armenia, Finland and Bulgaria
- In operation since fall 2004 (about to finish the 3rd observation cycle)
- A 2nd telescope (MAGIC-II) will be inaugurated on September 21st, 2008

5 Scientific highlights
- MAGIC has discovered 10 new VHE γ-ray sources (7 extragalactic + 3 galactic)
- New source populations unveiled (radio quasar, microquasar)
- Farthest object ever observed at these energies (z = 0.54)
- GRBs observed (though not detected) during the prompt emission
- Big flares used to test Lorentz invariance
- So far we have published ~30 papers, and many more are in the pipeline

6 Imaging
A segmented PMT camera (577/1039 channels for the first/second telescope) makes it possible to image Cherenkov showers.

7 Raw data volume
Event rate: R
Number of camera pixels: n
Digitization samples: s
Precision: p
Data volume rate = R × n × s × p

Phase | R (Hz) | n    | s  | p (bits) | 1 hour | 1 day   | 1 year
1     | 300    | 577  | 30 | 8        | 18 GB  | 150 GB  | 20 TB
2     | 300    | 577  | 80 | 10       | 62 GB  | 500 GB  | 75 TB
3     | 300    | 1616 | 80 | 10/12    | 175 GB | 1400 GB | 210 TB

Phase 1: one telescope, 300 MHz digitization system (Oct 2004 to Dec 2006)
Phase 2: one telescope, 2 GHz digitization system (Jan 2007 to Sep 2008)
Phase 3: two telescopes, 2 GHz digitization system (from Sep 2008)
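As a cross-check of the table, a minimal sketch of the rate formula in Python. The per-phase parameters come from the table; the roughly 8 hours of data taking per night and ~150 observation nights per year are inferred from the 1-day and 1-year columns, not numbers stated on the slide.

    # Cross-check: data volume rate = R * n * s * p.
    # Duty cycle (8 h/night, ~150 nights/yr) is inferred from the table,
    # not stated on the slide.
    HOURS_PER_NIGHT = 8
    NIGHTS_PER_YEAR = 150

    phases = {
        1: dict(R=300, n=577,  s=30, p=8),    # 1 telescope, 300 MHz digitization
        2: dict(R=300, n=577,  s=80, p=10),   # 1 telescope, 2 GHz digitization
        3: dict(R=300, n=1616, s=80, p=10),   # 2 telescopes, 2 GHz (lower 10/12-bit value)
    }

    for phase, q in phases.items():
        bytes_per_s = q["R"] * q["n"] * q["s"] * q["p"] / 8   # bits -> bytes
        gb_hour = bytes_per_s * 3600 / 1e9
        gb_night = gb_hour * HOURS_PER_NIGHT
        tb_year = gb_night * NIGHTS_PER_YEAR / 1000
        print(f"Phase {phase}: {gb_hour:.0f} GB/h, {gb_night:.0f} GB/night, "
              f"{tb_year:.0f} TB/yr")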

8 Data flow
[Flow diagram: the DAQ and a fast analysis run at La Palma; raw data (200 TB/yr) are transferred by FTP to PIC, Barcelona, where they are calibrated (calibrated data, 20 TB/yr) and reduced (reduced data, 2 TB/yr) for the user; the fast-analysis reduced data reach users by FTP/mail.]
Starting in September 2008 the MAGIC data center is hosted at PIC, Barcelona (Tier-1 center). It has already been in test phase for a year. It provides:
- automatic data transfer from La Palma
- tape storage of raw data
- automatic data analysis
- access to the latest year of calibrated and reduced data
- CPU and disk buffer for data analysis
- database

9 MAGIC/PIC data center
Data center disk needs (plus unlimited tape storage capacity):

Use                    | Size
1 yr reduced data      | 3 TB
1 yr calibrated data   | 21 TB
Buffer data processing | 21 TB
Buffer tape/disk I/O   | 21 TB
Users' buffer          | 6 TB
Total                  | 72 TB

The already-running system consists of:
- 25 TB of disk space (ramp-up to the final 72 TB foreseen for next September, within schedule)
- LTO3/LTO4 tape storage and I/O with robots
- ~20 CPUs (2 GHz) for data processing and analysis
- automation of data transfer/processing/analysis
- database
- web access

10 Trends foreseen for 2008
Philosophy: adopt Grid to allow MAGIC users to do better science ("if it ain't broken, don't fix it").
- Leverage the worldwide mutual trust agreement for Grid certificates to simplify user ID management for interactive login (ssh → gsi-ssh or equivalent) and casual file transfer (https via Apache+mod_gridsite, or gridftp; see the sketch below).
- Move to batch submission via Grid tools in order to unify CPU accounting with the LHC.
- Set up the Grid utility "reliable File Transfer Service" to automate file distribution between the MAGIC data center at PIC and sites which regularly subscribe to many datasets.
- PIC/IFAE will have specific resources to help with this, partially thanks to funding from the EGEE-III project.
- Integrate into the procedure for opening an account in the data center the additional steps for a user to get a Grid certificate and be included as a member of the MAGIC VO.
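For illustration, a minimal sketch of a casual gridftp transfer wrapped in Python. It assumes the standard Globus client tool globus-url-copy is on the PATH and that a valid Grid proxy already exists; the host name and file paths are hypothetical.

    # Hypothetical sketch: casual file transfer over gridftp with the
    # standard Globus tool globus-url-copy. Host and paths are made up;
    # a valid Grid proxy (e.g. from grid-proxy-init) is assumed.
    import subprocess

    SRC = "gsiftp://datacenter.example.org/magic/reduced/20080208_M1.root"
    DST = "file:///home/user/data/20080208_M1.root"

    def gridftp_copy(src: str, dst: str) -> None:
        """Copy one file with globus-url-copy, raising on failure."""
        subprocess.run(["globus-url-copy", src, dst], check=True)

    gridftp_copy(SRC, DST)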

11 Monte Carlo simulation
- The recorded data are mainly background events due to charged cosmic rays (CRs).
- Background rejection needs large samples of Monte Carlo simulated γ-ray and CR showers.
- This is very CPU consuming (simulating 1 night of background takes > 10^6 computer days; see the estimate below).
- Open issues: access to simulated samples, MC production coordination, scalability (MAGIC-II, CTA...).
- GRID can help with these issues.
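A back-of-envelope estimate of why one night of background costs of order 10^6 CPU days. Only the 300 Hz event rate comes from the slides; the observation time, trigger efficiency and per-shower CPU cost are illustrative assumptions.

    # Back-of-envelope for the ">1e6 computer days" figure. All inputs
    # except the 300 Hz rate (slide 7) are illustrative assumptions.
    EVENT_RATE_HZ = 300          # background trigger rate (slide 7)
    HOURS_PER_NIGHT = 8          # assumed observation time per night
    TRIGGER_EFFICIENCY = 1e-3    # assumed fraction of simulated CR showers that trigger
    CPU_S_PER_SHOWER = 10.0      # assumed CORSIKA-like cost per simulated shower

    triggered = EVENT_RATE_HZ * HOURS_PER_NIGHT * 3600     # events recorded in one night
    simulated = triggered / TRIGGER_EFFICIENCY             # showers that must be simulated
    cpu_days = simulated * CPU_S_PER_SHOWER / 86400

    print(f"{triggered:.1e} triggered events -> {simulated:.1e} showers "
          f"-> {cpu_days:.1e} CPU days")                   # ~1e6 CPU days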

12 The idea
H. Kornmayer (Karlsruhe) proposed the following scheme:
- A MAGIC Virtual Organization created within EGEE-II
- Involving three national Grid centers: CNAF (Bologna), PIC (Barcelona), GridKA (Karlsruhe)
- Connect MAGIC resources to enable collaboration
- 2 subsystems: MC (Monte Carlo) and Analysis
- Start with MC first

13 MC workflow
Example request: "I need 1.5 million hadronic showers with energy E, direction (theta, phi), ... as a background sample for the observation of the Crab Nebula."
The workflow (see the sketch below):
1. Run the MAGIC Monte Carlo Simulation (MMCS) and register the output data.
2. Simulate the telescope geometry with the reflector program for all interesting MMCS files and register the output data.
3. Simulate the starlight background for a given position in the sky and register the output data.
4. Simulate the response of the MAGIC camera for all interesting reflector files and register the output data.
5. Merge the shower simulation and the starlight simulation and produce a Monte Carlo data sample.
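A minimal sketch of that chain as a sequential Python pipeline. The executable names mmcs, reflector and camera follow the slide; every command-line argument, the starfield and merge program names, and the register() helper are hypothetical placeholders for the real programs and Grid file-catalog registration.

    # Sketch of the five-step MC chain; arguments and helpers are placeholders.
    import subprocess

    def register(path: str) -> None:
        """Stub: register an output file in the Grid data catalog."""
        print("registered:", path)

    def step(cmd: list, output: str) -> str:
        """Run one workflow stage, then register its output file."""
        subprocess.run(cmd, check=True)
        register(output)
        return output

    showers = step(["mmcs", "--nshowers", "1500000"], "showers.mmcs")
    optics  = step(["reflector", showers], "showers.rfl")
    stars   = step(["starfield", "--ra", "83.63", "--dec", "22.01"], "stars.rfl")
    pixels  = step(["camera", optics], "showers.cam")
    sample  = step(["merge", pixels, stars], "mc_sample.root")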

14 Implementation
3 main components:
- Meta database: bookkeeping of the requests, their jobs and the data.
- Requestor: the user defines the parameters by inserting the request into the meta database.
- Executor: creates Grid jobs by checking the meta database frequently (from cron) and generating the input files.
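A minimal sketch of one executor pass over the meta database; the sqlite backend, table and column names are hypothetical, since the slide does not describe the schema.

    # One cron-driven executor pass: pick up new requests, generate job
    # input files, mark them submitted. Schema is hypothetical.
    import sqlite3

    def write_job_input(req_id: int, params: str) -> None:
        """Stub: turn one request into a Grid job description file."""
        with open(f"job_{req_id}.jdl", "w") as f:
            f.write(params)

    def executor_pass(db_path: str = "meta.db") -> None:
        con = sqlite3.connect(db_path)
        try:
            rows = con.execute(
                "SELECT id, params FROM requests WHERE status = 'new'").fetchall()
            for req_id, params in rows:
                write_job_input(req_id, params)
                con.execute("UPDATE requests SET status = 'submitted' WHERE id = ?",
                            (req_id,))
            con.commit()
        finally:
            con.close()

    executor_pass()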

15 Status of MC production
- The last data challenge (from September 2005) produced ~15000 simulated γ-ray showers, with ~4% failures.
- After that H. Kornmayer left the project, and it has been stalled since.
- A new crew is taking over the VO (UCM Madrid + INSA + Dortmund), with the plan to start producing MC for MAGIC-II soon.

16 Virtual observatory
- MAGIC will share data with other experiments (GLAST, VERITAS, HESS... more?).
- There might be some time reserved for external observers (moving from experiment to observatory).
- In general, MAGIC results should be more accessible to the astrophysics community.
- MAGIC will release data at the PIC data center using GRID technology, in FITS format (see the sketch below).
- Step-by-step approach:
  - Published data (skymaps, light curves, spectra, ...) → imminent
  - Data shared with other experiments (GLAST) → soon
  - Data for external observers → mid-term
- A standard format has to be defined (with other experiments, the future CTA).
- Eventually integrated within a Virtual Observatory (under investigation).
[Figure: MAGIC skymap of the Crab Nebula, September 2006]
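To illustrate the FITS release format, a minimal sketch writing a skymap image with world coordinates to a FITS file. It uses today's astropy.io.fits (the pyfits of the era exposed the same interface); the map contents and most header values are placeholders, apart from the Crab Nebula coordinates.

    # Illustration: write a gamma-ray skymap to FITS. Map contents and
    # most header values are placeholders.
    import numpy as np
    from astropy.io import fits

    skymap = np.zeros((200, 200), dtype=np.float32)   # e.g. excess events per sky bin

    hdu = fits.PrimaryHDU(skymap)
    hdu.header["OBJECT"] = "Crab Nebula"
    hdu.header["CTYPE1"] = "RA---TAN"   # gnomonic projection in right ascension
    hdu.header["CTYPE2"] = "DEC--TAN"   # and declination
    hdu.header["CRVAL1"] = 83.63        # Crab Nebula RA (deg)
    hdu.header["CRVAL2"] = 22.01        # Crab Nebula Dec (deg)
    hdu.writeto("crab_skymap.fits", overwrite=True)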

17 MAGIC-GRID architectural design proposal
- MAGIC Requestor (client): the user specifies the parameters of a particular job request through an interface. The Requestor should allow interaction with VO tools, staying as close as possible to the new emerging astronomical applications. The request is sent to the service as a VOTable inside a SOAP message (*).
- Meta database: bookkeeping of the requests, their jobs and the data.
- MAGIC Executor (service): the server application creates Grid job template files using the middleware (gLite, LCG) and submits the jobs to the Grid for execution; job status information and results are sent back to the client.
- Grid: the workflow (software: MMCS, Reflector, Camera) is executed on the available Grid nodes within the MAGIC Virtual Organization.
- Data product storage unit: the products are stored in a data product storage unit. The MAGIC Executor is notified when the jobs have finished. The application will be designed to send the output data to a persistent layer compliant with the emerging VOSpace protocol (to be implemented), providing Virtual Observatory accessibility.
(*) SOAP: Simple Object Access Protocol. VOTable: XML standard for the interchange of data represented as a set of tables (http://www.ivoa.net/Documents/REC/VOTable/VOTable-20040811.html)
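Since the slide does not show an actual request payload, here is a hypothetical sketch of a minimal VOTable-encoded MC request built with Python's standard library; the field names and values are invented for illustration.

    # Hypothetical VOTable-encoded request; field names are invented.
    import xml.etree.ElementTree as ET

    votable = ET.Element("VOTABLE", version="1.1")
    table = ET.SubElement(
        ET.SubElement(votable, "RESOURCE", name="magic_mc_request"),
        "TABLE", name="request")
    for name, dtype in [("nshowers", "int"), ("energy_gev", "float"),
                        ("theta_deg", "float"), ("phi_deg", "float")]:
        ET.SubElement(table, "FIELD", name=name, datatype=dtype)
    row = ET.SubElement(ET.SubElement(ET.SubElement(table, "DATA"),
                                      "TABLEDATA"), "TR")
    for value in ["1500000", "100.0", "20.0", "0.0"]:
        ET.SubElement(row, "TD").text = value

    print(ET.tostring(votable, encoding="unicode"))  # body of the SOAP message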

18 Summary
- The MAGIC scientific program requires large computing power and storage capacity.
- The data center at PIC/IFAE (Barcelona) is up, and starts official operation in September 2008 with MAGIC-II.
- Massive MC production for MAGIC-II will involve GRID.
- (Some) data will be released through a virtual observatory.
- MAGIC is a good benchmark for other present and future astroparticle projects.

