Computing at Fermilab D. Petravick Fermilab August 16, 2007.

1 Computing at Fermilab D. Petravick Fermilab August 16, 2007

2 FNAL Scientific Computing
–CDF
–D0
–Lattice QCD
–CMS Tier 1 center
–Experimental Astrophysics
–Theoretical Astrophysics (partner w/ Kavli Institute, U. Chicago)
–Accelerator Simulations
–Neutrino Program

3 HEP Computing at Fermilab
Fermilab has a substantial, long-established program in capacity computing as part of its experimental HEP program. Roughly:
–Event simulation
–Online filtering (~1:10,000 rejection)
–“Production” (instrumental -> physical quantities)
–Analysis: study of the data
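The 1:10,000 online filtering figure above can be made concrete with a back-of-envelope sketch. The collision rate used here is an illustrative hadron-collider ballpark, not a number from this talk; only the rejection factor comes from the slide.

```python
# Hedged sketch: illustrate what a 1:10,000 online filter implies.
collision_rate_hz = 2_500_000   # assumed raw event rate (illustrative only)
rejection = 10_000              # slide's figure: keep ~1 event in 10,000

recorded_rate_hz = collision_rate_hz / rejection
print(recorded_rate_hz)         # events written to storage per second
```

The point is simply that the online filter reduces the storage problem by four orders of magnitude before "production" processing ever sees the data.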

4 HEP software environment
Large codes and large frameworks, shared between tasks, in a variety of languages:
–Benefits from homogeneous development and deployment environments.
–Lap/desktop -> into framework -> batch.
–Codes are likely larger and more complex than many HPC codes.
Environment is overwhelmingly Linux.
–Scant traction for other environments.

5 Lattice at Fermilab
A national collaboration oversees computing resources.
–The group sees roles for both DOE exascale and local computing.
–Relatively large amount of shared data and techniques.
–Configurations are re-used by different groups.
3000 cores at Fermilab:
–Cluster approach.
–Coupled with Myrinet and InfiniBand.
–Additional procurement in federal FY 08-09.

6 Cosmological Computing
Cosmological simulations are powerful tools for understanding the growth of structure in the universe; they are essential for extracting precision information about fundamental physics, such as dark energy and dark matter, from upcoming astronomical surveys. The roadmap calls for:
–Formation of larger collaborations.
–Computational methods and data that are more diverse than lattice.
–10,000 cores by 2012.
–A 12 PB data requirement.

7 Other elements of lab program
Not mentioned in this overview:
–Experimental Astrophysics (Sloan Digital Sky Survey, Dark Energy Survey)
–Other elements of Theoretical Astrophysics
–Accelerator simulations
–Neutrino program
(Figure captions: Synergia simulation of bunching in the Tevatron; element of CDMS detector; SDSS lensing)

8 Significant campus storage and data movement
–~6.5 PB on tape; a few PB of disk.
–Ethernet SAN: > 40 Gbps disk movement.
–300 TB/mo ingest.
–~60 TB/day tape movement.
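A quick sanity check relates these figures to each other. This is a hedged back-of-envelope sketch assuming decimal (SI) units and 30-day months; none of the derived numbers appear on the slide itself.

```python
# Back-of-envelope check of the slide's data-movement figures (SI units assumed).
gbps = 40                            # quoted LAN disk-movement capacity
bytes_per_sec = gbps * 1e9 / 8       # 40 Gbps -> 5e9 bytes/s
tb_per_day = bytes_per_sec * 86_400 / 1e12
print(round(tb_per_day))             # LAN capacity in TB/day

ingest_tb_per_day = 300 / 30         # 300 TB/mo ingest, assuming a 30-day month
print(ingest_tb_per_day)
```

The 40 Gbps LAN figure works out to roughly 432 TB/day of headroom, comfortably above both the ~60 TB/day tape movement and the ~10 TB/day average ingest.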

9 Which induces the need for networking
ESnet MAN, plus a Wide Area Network R&D group.

10 ESnet4 IP + SDN, 2009 Configuration (Est.)
(Map of the planned ESnet4 network: IP core and Science Data Network core hubs; metropolitan area networks including the West Chicago MAN, Long Island MAN, and San Francisco Bay Area MAN; lab sites including FNAL, ANL, BNL, LBNL, SLAC, NERSC, and JLab; LHC-related links and international IP connections. The slide notes the map is uncorrected. Map labels not reproduced.)

11 HEP capacity, adapted…
HEP has adopted grid techniques as the basis for a worldwide computing infrastructure. Both data and jobs are shipped about. Fermilab leadership:
–FermiGrid: campus scope.
–Open Science Grid: national scope.
–WLCG: global scope.

12 2002 Power Forecast

13 Computing Centers
Three computing buildings:
–FCC: purpose-built, more than 20 years old.
–LCC, GCC: built on former experimental halls with substantial power infrastructure.
(Photo labels: LCC, GCC, FCC)

14 Type of Space
Thinking continues to evolve; the notion is that there is:
–Space for new computing systems.
–Space for long-lived, “tape” storage.
–Space for generator-backed disk.
–Space for other disk.

15 GCC Grid Computing Center
(Diagram: legacy building, first expansion, and second expansion; computer rooms CRA, CRB, CRC, and tape room)

16 Just in Time Delivery
Power (KVA, excluding cooling):
–FCC: 600
–LCC: 710
–CRA: 840
–CRB: 840
–CRC: 840
–Tape: 45
–Total: 3875
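The per-building figures above can be checked against the stated total with a trivial sketch (all numbers are from the slide):

```python
# Verify the slide's power budget (KVA, excluding cooling) sums to its total.
kva = {"FCC": 600, "LCC": 710, "CRA": 840, "CRB": 840, "CRC": 840, "Tape": 45}
total = sum(kva.values())
print(total)  # 3875, matching the slide's stated total
```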

17 Summary
Fermilab has a diverse scientific computing program. Forecasting has shown a need to continually expand computing facilities, and the lab has expanded its capacity incrementally. Even so, the program anticipates substantial future growth.
