Presentation on theme: "Grids and HEP" — Presentation transcript:

1 Grids and HEP


6 The scale of one year of LHC data: a stack of CDs holding one year of LHC data would be ~20 km tall — taller than the Concorde's altitude (15 km) and Mt. Blanc (4.8 km), approaching a high-altitude balloon (30 km). 1 Petabyte = 10^15 Bytes = 10^3 Terabytes = 10^6 Gigabytes = 10^9 Megabytes = 10^12 Kilobytes. One year of LHC data is ~20 Petabytes. How can these data be analyzed?
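As a sanity check on these figures, a minimal back-of-the-envelope sketch in Python; the CD capacity and thickness are my own assumed values, not numbers from the slide, and the result lands in the same tens-of-kilometres range as the slide's ~20 km estimate:

```python
# Rough check of the slide's data-scale claims.
# CD capacity and thickness below are assumptions, not figures from the slide.

LHC_YEAR_BYTES = 20e15        # ~20 Petabytes of LHC data per year (from the slide)
CD_CAPACITY_BYTES = 700e6     # assumed: ~700 MB per CD
CD_THICKNESS_M = 1.2e-3       # assumed: ~1.2 mm per disc

print(f"1 Petabyte = {1e15:.0e} bytes = {1e15 / 1e12:.0f} Terabytes")

n_cds = LHC_YEAR_BYTES / CD_CAPACITY_BYTES
stack_km = n_cds * CD_THICKNESS_M / 1000
print(f"CDs needed: {n_cds:.1e}, stack height: ~{stack_km:.0f} km")
# -> a few tens of km, the same order of magnitude as the slide's ~20 km stack
```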

7 Evolving Quantitative Science Requirements for Networks (DOE High Performance Network Workshop). Slide courtesy of H. Newman.

Science Areas | End2End Throughput Today | End2End Throughput in 5 Years | End2End Throughput in 5-10 Years | Remarks
High Energy Physics | 0.5 Gb/s | 100 Gb/s | 1000 Gb/s | High bulk throughput
Climate (Data & Computation) | 0.5 Gb/s | 160-200 Gb/s | N x 1000 Gb/s | High bulk throughput
SNS NanoScience | Not yet started | 1 Gb/s | 1000 Gb/s + QoS for Control Channel | Remote control and time critical throughput
Fusion Energy | 0.066 Gb/s (500 MB/s burst) | 0.198 Gb/s (500 MB / 20 sec. burst) | N x 1000 Gb/s | Time critical throughput
Astrophysics | 0.013 Gb/s (1 TByte/week) | N*N multicast | 1000 Gb/s | Computational steering and collaborations
Genomics Data & Computation | 0.091 Gb/s (1 TByte/day) | 100s of users | 1000 Gb/s + QoS for Control Channel | High throughput and steering
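To put the HEP row in perspective, a short illustrative calculation (my own arithmetic, assuming a fully sustained end-to-end rate with no protocol overhead or contention) of how long moving one Petabyte takes at each of the table's throughputs:

```python
# Illustrative only: time to move 1 Petabyte at the table's HEP throughputs,
# assuming the full end-to-end rate is sustained for the whole transfer.

PETABYTE_BITS = 8e15  # 1 PB = 10^15 bytes = 8e15 bits

for label, gbps in [("today", 0.5), ("in 5 years", 100), ("in 5-10 years", 1000)]:
    seconds = PETABYTE_BITS / (gbps * 1e9)
    print(f"1 PB at {gbps:>6} Gb/s ({label}): {seconds / 86400:7.2f} days")
# -> ~185 days today vs. under a day at 100 Gb/s and above
```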

8 CMS Experiment / DISUN: Data Intensive Science University Network. The CMS tiered computing model:
Online System → Tier 0 (CERN Computer Center) at 0.2 - 1.5 GB/s;
Tier 0 → Tier 1 centers (FNAL, Korea, Russia, UK) at ~10 Gb/s;
Tier 1 → Tier 2 sites (UCSB, UCR, UFl, UCSD, Caltech, UCLA, UERJ, USP), with DISUN links at 10+ Gb/s;
Tier 2 → Tier 3 (physics caches) → Tier 4 (PCs).
- 10s of Petabytes/yr by ~2008
- 1000 Petabytes in < 10 yrs?
- > 50% of CPU in Tier2s
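A rough consistency check of the link capacities in this model (my own arithmetic, taking the ~20 PB/yr figure from slide 6 as the volume exported from Tier 0):

```python
# Rough check: what sustained rate does shipping one year of CMS data out of Tier 0 imply?

YEARLY_DATA_BYTES = 20e15          # assume ~20 PB/yr leaves CERN (order of magnitude from slide 6)
SECONDS_PER_YEAR = 365 * 24 * 3600

avg_gbps = YEARLY_DATA_BYTES * 8 / SECONDS_PER_YEAR / 1e9
print(f"Average export rate: ~{avg_gbps:.1f} Gb/s")
# -> ~5 Gb/s sustained, so ~10 Gb/s Tier 0 -> Tier 1 links leave headroom for bursts
```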

9 HEPGRID-BRAZIL is a project to build a Grid that:
- at the regional level, will include CBPF, UFRJ, UFRGS, UFBA, UERJ & UNESP;
- at the international level, will be integrated with the CMS Grid based at CERN; focal points include iVDGL & bilateral projects with the Caltech Group.
Diagram: CERN (T0 + T1) is linked at 2.5 - 10 Gbps to T1 centers in France, Italy, USA and Germany; the EU link and the link to BRAZIL run at 622 Mbps. Within the Brazilian HEPGRID, the UERJ Regional Tier2 Center (T2 → T1) connects over the GIGA network to UFRGS, UFRJ, CBPF, UNESP/USP SPRACE and UFBA (T3 → T2), and down to T4 individual machines and on-line systems.
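What the 622 Mbps international link implies in practice, as a back-of-the-envelope sketch (the 1 TB dataset size is a hypothetical example, and full link utilisation is assumed):

```python
# Illustrative only: transfer time to Brazil over the 622 Mbps link, assuming full utilisation.

LINK_GBPS = 0.622
DATASET_TB = 1.0                       # hypothetical 1 TB analysis dataset

hours = DATASET_TB * 8e12 / (LINK_GBPS * 1e9) / 3600
print(f"1 TB over {LINK_GBPS * 1000:.0f} Mbps: ~{hours:.1f} hours")
# -> ~3.6 hours per TB at best; bulk Tier-1-scale transfers need the 2.5-10 Gbps class links
```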

10 Grid Analysis Environment

11 T2 - HEPGRID BRAZIL (now at UERJ): 100 dual-CPU Itautec / Pentium 4 2.4 GHz nodes, each with 1 GB RAM and a 40 GB HD, starting with 7 TB of RAID storage; plus servers, RAID disks and UPS (no-break) units. All switches are behind the racks. 5 LCD monitors are used for monitoring, MonALISA, communication and so on; one is reserved for a group member to train on and develop software.
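The aggregate capacity this initial configuration adds up to, as a small sketch (assuming all 100 nodes match the per-node specification quoted above):

```python
# Aggregate capacity of the Tier-2 cluster as listed on the slide
# (assumes all 100 nodes match the quoted per-node configuration).

NODES = 100
cpus = NODES * 2                    # dual-CPU Pentium 4 2.4 GHz nodes
ram_gb = NODES * 1                  # 1 GB RAM per node
local_disk_tb = NODES * 40 / 1000   # 40 GB HD per node
raid_tb = 7

print(f"{cpus} CPUs, {ram_gb} GB RAM, "
      f"{local_disk_tb:.0f} TB local disk + {raid_tb} TB RAID = {local_disk_tb + raid_tb:.0f} TB total")
# -> 200 CPUs, 100 GB RAM, ~11 TB of disk in the initial configuration
```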

12 US CMS and UERJ Tier2s and DISUN in Grid3 & the Open Science Grid.
Grid3: A National Grid Infrastructure
- 35 sites, 3500 CPUs: universities + 4 national labs
- Part of the LHC Grid
- Running since October 2003
- HEP, LIGO, SDSS, Biology, Computer Science
Transition to the Open Science Grid: http://www.openscience.org
7 US CMS Tier2s + Brazil (UERJ, USP); Caltech, Florida, UCSD and UWisc form DISUN.


14 Main software in use: Linux, Globus, MonALISA, Ganglia, PhEDEx (centralized database), Condor, MONARC, physics analysis software (Fortran and C++).
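Since Condor and Globus both appear in this stack, a minimal sketch of what submitting an analysis job through Condor-G to a Globus (GT2) gatekeeper could look like; this is not taken from the presentation, and the gatekeeper hostname, executable and file names are purely illustrative:

```text
# analysis.sub -- hypothetical Condor-G submit description (sketch, not the project's actual config)
universe                = grid
grid_resource           = gt2 grid.example.br/jobmanager-condor   # placeholder Globus gatekeeper
executable              = analysis                                 # illustrative analysis binary
arguments               = input.dat
output                  = analysis.out
error                   = analysis.err
log                     = analysis.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
```

The job would then be submitted with `condor_submit analysis.sub`, with MonALISA and Ganglia providing the monitoring side of the same stack.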

