2004/12/22

Brief Outline of the Earth Simulator and Our Research Activities,
and A Lesson Learnt from the Past Three Years

Wataru Ohfuchi (email@example.com)
Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology
Atmospheric and Oceanic Simulation Group and AFES Working Team
Inside the Earth Simulator Building
[Floor-plan figure: building footprint 65m (71yd) x 50m (55yd)]
– PN cabinets (320) and IN cabinets (65)
– Double floor for cables
– Power supply system
– Cartridge tape library system
– Magnetic disk system
– Air conditioning system
– Seismic isolation system
Comparison of PN Size
[Figure: one NEC SX-4 node next to one Earth Simulator PN, drawn to scale]
– NEC SX-4 (1 node): peak performance 64 Gflops, electric power about 90 kVA, air cooling, footprint about 6m x 7m
– Earth Simulator (1 PN): peak performance 64 Gflops, electric power about 8 kVA, air cooling, footprint about 70cm x 100cm
Configuration of the Earth Simulator
– 640 processor nodes (PN #0 – PN #639), connected by an interconnection network (full crossbar switch)
– Each PN: 8 arithmetic processors (AP #0 – AP #7) with 16GB of shared memory
– Peak performance/AP: 8 Gflops
– Peak performance/PN: 64 Gflops
– Shared memory/PN: 16 GB
– Total number of APs: 5120; total number of PNs: 640
– Total peak performance: 40 Tflops; total main memory: 10 TB
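The aggregate figures follow directly from the per-node numbers; a minimal Python sanity check, using only the values on this slide:

```python
# Per-node figures from the slide.
aps_per_pn = 8       # arithmetic processors per processor node
gflops_per_ap = 8    # peak Gflops per AP
mem_per_pn_gb = 16   # shared memory per PN, in GB
num_pns = 640        # processor nodes

total_aps = num_pns * aps_per_pn
peak_tflops = num_pns * aps_per_pn * gflops_per_ap / 1000
total_mem_tb = num_pns * mem_per_pn_gb / 1024

print(total_aps)      # 5120
print(peak_tflops)    # 40.96 -> quoted as ~40 Tflops
print(total_mem_tb)   # 10.0 TB
```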
An Overview of AFES (AGCM for the Earth Simulator)
– Primitive equation system (hydrostatic approximation)
  – Valid (arguably) down to a 10-km mesh (T1279)
– Spectral Eulerian
– Physical processes
  – Cumulus parameterizations (Arakawa-Schubert, Kuo, MCA, Emanuel)
  – Radiation (mstranX: Sekiguchi et al. 2004)
  – Surface model: MATSIRO (Takata et al. 2004)
  – Etc.
– Adapted from CCSR/NIES AGCM 5.4.02
  – Center for Climate System Research, the University of Tokyo
  – Japanese National Institute for Environmental Studies
  – Rewritten totally from scratch in FORTRAN 90 with MPI and microtasking
AFES won the Gordon Bell Award for Peak Performance!!!
Meso-scale Resolving T1279L96 Simulations
– Typhoons, wintertime cyclogenesis and the Baiu-Meiyu front
  – Interactions between large-scale circulations and meso-scale phenomena
  – Self-organization of meso-scale circulations in the larger circulation field
– Short-term (10 days to 2 weeks)
  – CPU power is NOT a problem; data size (~terabytes) is the problem
Our ES Project 2004
Mechanism and predictability of atmospheric and oceanic variations induced by interactions between the large-scale field and meso-scale phenomena
– Project leader: Wataru Ohfuchi
– "FES" models + THORPEX
– AFES
  – Sub-project leader: Takeshi Enomoto (ESC)
  – AGCM
– OFES
  – Sub-project leaders: Hideharu Sasaki (ESC), Hirofumi Sakuma (FRCGC), Yukio Masumoto (FRCGC/U. Tokyo)
  – MOM3-based OGCM
– CFES
  – Sub-project leader: Nobumasa Komori (ESC)
  – Coupled model: AFES + OIFES (OFES + IARC sea ice model)
– THORPEX
  – Sub-project leader: Tadashi Tsuyuki (NPD, JMA)
  – High-resolution singular vector method and predictability
Summary
– With the combination of the ES and models well optimized for its architecture, it is now possible to conduct meaningful (ultra-)high-resolution global simulations for the first time in the history of computational atmospheric and oceanic sciences and geophysical fluid dynamics.
– Interactions between meso-scale phenomena and the larger-scale circulation can be studied.
– Scientifically new knowledge and contributions to society are expected.
A Possible Future Direction of High Performance Computing in Atmospheric and Oceanic Sciences: A Lesson Learnt from the Past Three Years with the Earth Simulator
– The Earth Simulator was, unfortunately, not perfect, of course.
– What I foresee as a future modeling strategy.
– What I foresee as the future of HPC in AOS.
– To be published in Advances in Science: Earth Science, edited by Prof. Peter Sammonds, the Royal Society's Philosophical Transactions (2005).
How Many Points Are There in the 10-km Mesh AGCM?
T1279L96
– Spherical harmonics up to total wavenumber 1279 with the so-called triangular truncation.
– 3840 (longitude) X 1920 (latitude) X 96 (layers) = ~700 M points.
Assume double precision (8B) and 100 variables…
– ~560 GB.
– Actually, the T1279L96 AFES needs about 1.2 TB of memory.
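The point count and the memory estimate can be reproduced in a few lines of Python; the grid dimensions and the 8-byte/100-variable assumption are taken from the slide:

```python
# T1279L96 Gaussian grid dimensions from the slide.
nlon, nlat, nlev = 3840, 1920, 96
points = nlon * nlat * nlev            # ~708 million grid points

# Memory estimate: double precision (8 B) times ~100 variables.
bytes_total = points * 8 * 100

print(points)              # 707788800
print(bytes_total / 1e9)   # 566.23104 -> slide rounds to ~560 GB
```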
How Much Data Are We Producing with the 10-km Mesh AGCM?
– One 3-D snapshot: 2.6GB.
– Oh, we need 6-hourly output!!! Ten 3-D variables!!! For one day:
  – 2.6GB X 4/day (6-hourly) X 10 variables = 104GB/day.
– Oh, we want to integrate for 10 days: ~1TB.
– Oh, we are climatologists!!! We want to integrate for 10000 days: ~1PB.
– Oh, we need ten sensitivity tests!!! 10PB.
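The 2.6GB snapshot figure matches single-precision (4-byte) output on the T1279L96 grid when measured in GiB; under that assumption (mine, not stated on the slide), the whole chain of estimates reproduces in Python:

```python
# Assumption: output is single precision (4 B/value), sizes in binary units.
points = 3840 * 1920 * 96              # T1279L96 grid points (~708 M)
snapshot_gib = points * 4 / 2**30      # one 3-D snapshot

per_day_gib = snapshot_gib * 4 * 10    # 6-hourly output, 10 variables
ten_days_tib = per_day_gib * 10 / 1024
climate_pib = per_day_gib * 10_000 / 1024**2   # 10000-day climate run
ten_tests_pib = climate_pib * 10

print(round(snapshot_gib, 2))   # 2.64  (slide: 2.6GB)
print(round(per_day_gib, 1))    # 105.5 (slide: 104GB/day)
print(round(ten_days_tib, 2))   # 1.03  (slide: ~1TB)
print(round(ten_tests_pib, 1))  # 10.1  (slide: 10PB)
```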
Future HPC in the World (in 2010…)
– VERY unfortunately, the HPC hardware business is currently dominated by IMPERIAL JAPAN and the US of A!!!
– Suppose an emerging supercomputing country, Taiwan, Republic of China, takes over the submerging Japan and USA within a few years.
– The Earth System Simulator, the National Taiwan Normal University:
  – 1 Pflops machine (25 times larger than the ES).
  – 1 exabyte of hard disk / PROJECT!!!
  – 1 zettabyte of long-term storage / PROJECT!!!
So, What Can We Do with The Earth System Simulator at the National Taiwan Normal University in 2010?
– When you increase the resolution by a factor of two:
  – 2 (longitude) X 2 (latitude) X 2 (vertical levels) X 2 (time) = 16, roughly the ~25x speedup.
– The current biggest global atmospheric simulation project on the ES is a ~3-km mesh (nonhydrostatic).
  – So, a ~1.5-km mesh simulation.
  – Sorry, it's not cloud resolving, yet!!!
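Setting the factor-of-16 cost of one doubling against the hypothetical machine's ~25x speedup is simple arithmetic; a sketch in Python, with the 1 Pflops and 40 Tflops figures taken from the preceding slides:

```python
# Doubling resolution in a 3-D model: 2x per horizontal direction,
# 2x vertical levels, and ~2x more time steps (halving the grid
# spacing roughly halves the stable time step via the CFL condition).
cost_per_doubling = 2 * 2 * 2 * 2   # = 16

speedup = 1000 / 40                 # 1 Pflops vs the ES's 40 Tflops

print(cost_per_doubling)  # 16
print(speedup)            # 25.0
# A ~25x faster machine buys roughly one doubling: ~3-km -> ~1.5-km mesh.
```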
We Need to Think Better Than That!!!
– Multi-scale modeling:
  – Stand-alone global hydrostatic model.
  – Stand-alone regional nonhydrostatic model (as super-parameterization).
  – Stand-alone 3-D turbulence model.
  – Super-parameterization-like links between these models.
– We may have to go down to explicit cloud physics.
– Of course, 3-D radiation!!!
Conclusions 1
– HPC is not only number-crunching capability.
  – Linpack has become totally obsolete.
– Data handling is much, much, much MORE important.
– We are already in the middle of the storm of data!
  – Hard disk.
  – Long-term data storage.
  – Software.
– But still we need to think much better.
  – Just increasing resolution does not seem to lead to a breakthrough.
Conclusions 2
– A HUGE HPC system should be used as a whole.
  – The ES consists of 640 nodes.
  – Sorry, those jobs that require fewer than ~320 nodes should go away.
– Expensive vs. cheapo:
  – Vector vs. scalar?
  – We need to think about cost effectiveness.
  – It may depend on the problem.
– GRID?
  – Probably very good for data sharing.
  – Simulations?
– We need to integrate science and engineering.
  – At least we need to understand both and have strong opinions.