Computing infrastructures for the LHC: current status and challenges of the High Luminosity LHC future


1 Computing infrastructures for the LHC: current status and challenges of the High Luminosity LHC future

Worldwide LHC Computing Grid (WLCG):
- Distributed infrastructure of ~150 computing centers in 40 countries; the world's largest computing grid.
- Over 300 thousand CPU cores (~2 million HEP-SPEC-06).
- The biggest site has ~60 thousand CPU cores; the 12 Tier-1 sites have 2-30 thousand CPU cores each.

During Run 1 (2009-2013):
- 27 PetaBytes (PB) of RAW data from the LHC to tape in 2012+2013 (p-p and p-Pb); 15 PB in 2010, 22 PB in 2011 (p-p and Pb-Pb).
- Total archive on the CERN tape storage system CASTOR ~80 PB, of which ~75 PB is LHC data.
- Up to 4.6 PB/month written to tape (see the back-of-the-envelope check below).
- Reliable operations and services through the entire Run 1 period enabled fast publication of scientific results.

Dagmar Adamova, NPI AS CR Prague/Rez
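
As a back-of-the-envelope check of the peak tape rate quoted above, a short Python sketch (assuming a 30-day month and decimal petabytes) converts 4.6 PB/month into an average sustained throughput:

    # Sustained throughput implied by writing 4.6 PB to tape in one month.
    PB = 1e15                        # decimal petabyte, in bytes
    month_s = 30 * 24 * 3600         # ~one month, in seconds
    rate_Bps = 4.6 * PB / month_s
    print(f"{rate_Bps / 1e9:.2f} GB/s")  # ~1.77 GB/s sustained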

2 Current activities during LS1

- Re-processing of data produced in Run 1.
- Additional simulation productions.
- Testing the use of HLT (High Level Trigger) farms for offline processing as additional Tier sites. ATLAS and CMS use OpenStack (open source cloud software) to manage their farms (a minimal provisioning sketch follows below).
- Run 2 will bring ~2x higher center-of-mass energy and ~2x larger pile-up.
- The anticipated growth of computing resources with a constant budget should meet the demands of Run 2, given optimized resource usage.
- Total delivered luminosity at p-p energy √s = 7/8 TeV in 2011+2012: 28.3 fb⁻¹.

Outlook and planning for beyond Run 2:
- 3000 fb⁻¹ in about 10 years at p-p energy √s = 14 TeV.
- Anticipated LHC RAW data volume: ~130 PB/year in Run 3, several hundred PB/year in Run 4.
- The experiments' current computing models do not scale accordingly: updates are inevitable.
- Grid growth of 25%/year is not sufficient; additional solutions are needed.

[Figures: RAW + derived data in Run 1; expected data volumes for HL-LHC (in PB); large data producers]
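
To illustrate the OpenStack-based farm management mentioned above, here is a minimal openstacksdk sketch for booting a single worker VM. The cloud name, image name, and flavor are hypothetical placeholders, not the experiments' actual configuration:

    # Boot one worker VM on an OpenStack-managed HLT farm (sketch only).
    import openstack

    conn = openstack.connect(cloud="hlt-farm")          # hypothetical clouds.yaml entry

    image = conn.compute.find_image("worker-node")      # hypothetical image name
    flavor = conn.compute.find_flavor("m1.large")
    server = conn.compute.create_server(
        name="offline-worker-001",
        image_id=image.id,
        flavor_id=flavor.id,
        # network selection omitted; real clouds usually require one
    )
    server = conn.compute.wait_for_server(server)       # block until ACTIVE
    print(server.status)

In practice the experiments provision whole batches of such VMs and hand them over to their workload management systems; the sketch shows only the provisioning call itself.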

3 Towards Run 3 and beyond

To guarantee enough computing resources for Run 3 and beyond, work is ongoing in several areas:

1. Update of the experiments' computing models
- Re-engineer experiment software and thereby optimize the use of available resources
- Use of HLT farms for offline data processing

2. Simplify Grid middleware layers
- Use open source cloud technologies for job submission and management
- Run 2 will see a migration to more cloud-like models

3. Data management (the key issue)
- Working towards transparent distributed data access enabled by efficient networks
- Data federations based on XRootD and/or HTTP (a minimal remote-read sketch follows this list)
- Optimizing data access from jobs: remote access, remote I/O
- More intelligent data placement/caching
- Data popularity services

4. Use of opportunistic resources
- Submission of simulation jobs to some of the HPC centers worldwide (TITAN, STAMPEDE)
- Use of commercial clouds for simulations: tests are ongoing, but the price is still too high for regular productions
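
As a minimal illustration of remote I/O through an XRootD data federation, the sketch below opens a file by its federation URL and reads a small byte range. It assumes the XRootD Python bindings are installed; the redirector host and file path are hypothetical placeholders:

    # Remote read through an XRootD federation (sketch only).
    from XRootD import client

    url = "root://redirector.example.org//store/data/sample.root"  # hypothetical

    f = client.File()
    status, _ = f.open(url)
    if status.ok:
        # Only the requested bytes cross the network: this is what makes
        # remote I/O from jobs attractive compared to copying whole files.
        status, data = f.read(offset=0, size=1024)
        print(len(data), "bytes read remotely")
        f.close()

Federations built on HTTP work analogously, with standard HTTP clients taking the place of the XRootD one.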

