Presentation transcript: "Farms/Clusters of the future"

1 Farms/Clusters of the future
Large clusters O(1000): any existing examples? Yes: supercomputing and PC clusters at LLNL, Los Alamos, Google.
No long-term experience (except Google).
ASCI installation: power station (x MW), well-organized recovery management (see ENIAC).
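To see why O(1000)-node clusters need organized recovery management, a rough failure-rate estimate helps. The sketch below is illustrative only; the per-node MTBF value is an assumption, not a figure from the slides.

# Rough failure-rate estimate for a large cluster (illustrative sketch).
# Assumption: per-node MTBF of 3 years; this value is NOT from the slides.

NODES = 1000                # O(1000) cluster size discussed in the slide
MTBF_HOURS = 3 * 365 * 24   # assumed per-node mean time between failures

# With independent, identical nodes the aggregate failure rate grows
# linearly with the node count, so the cluster sees roughly one failure every:
hours_between_failures = MTBF_HOURS / NODES

print(f"Assumed per-node MTBF: {MTBF_HOURS} h")
print(f"Expected time between node failures in the cluster: "
      f"{hours_between_failures:.1f} h")

Under this assumption a 1000-node farm loses a node roughly once a day, which is why recovery has to be routine rather than exceptional.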

2 Farms/Clusters of the future: Reliability
By Martin H. Weik, 1961, Ordnance Ballistic Research Laboratories, Aberdeen Proving Ground, MD:
"The ENIAC's first few years at the Aberdeen Proving Ground were difficult ones for the operating and maintenance crews. … The result was a huge preventive-maintenance and testing program, which, in the end, led to some major modifications of the system. Tubes were life-tested, and statistical data on the failures were compiled. This information led to many improvements in vacuum tubes themselves. Procurement of large quantities of improved, reliable tubes, however, became a difficult problem. Power-line fluctuations and power failures made continuous operation directly off transformer mains an impossibility. The substantial quantity of heat which had to be dissipated into the warm, humid Aberdeen atmosphere created a heat-removal problem of major proportions. Down times were long; error-free running periods were short."

3 Farms/Clusters of the future: Avoid local disks?

4 Future CPUs: Low Power (mobile?)?

5 Architecture Results
Benchmarks: Amanda Simulation and Amanda Reconstruction (user and system CPU time, seconds).

Host: oceanide1, new desktop PC, 533 MHz FSB Pentium 4, 2.40 GHz, 512 KB cache, 256 MB memory
  Amanda Simulation: 1625.460u, 10.530s    Amanda Reconstruction: 138.680u, 0.140s

Host: minerva, theory PC, 400 MHz FSB XEON-P4, 2.00 GHz, 512 KB cache, 2 GB memory
  Amanda Simulation: 1968.150u, 4.620s    Amanda Reconstruction: 167.110u, 0.120s

Host: euterpe, network services, Pentium III (Tualatin), 1266 MHz, 512 KB cache, 1 GB memory
  Amanda Simulation: 3158.580u, 9.370s    Amanda Reconstruction: 154.830u, 0.740s

Host: ice53, farm node, Pentium III (Coppermine), 800 MHz, 256 KB cache, 512 MB memory
  Amanda Simulation: 4889.670u, 17.010s    Amanda Reconstruction: 250.290u, 0.310s
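The u/s figures above are user and system CPU times in the format reported by the shell's time command. As a hedged illustration, the same kind of numbers can be collected for any child process as sketched below; the command name is a placeholder, not the actual Amanda binary.

# Sketch: measuring user/system CPU time of a child process, matching the
# u/s columns above. "amanda_reco" and its argument are placeholders.
import resource
import subprocess

cmd = ["./amanda_reco", "run.config"]   # hypothetical benchmark command

before = resource.getrusage(resource.RUSAGE_CHILDREN)
subprocess.run(cmd, check=True)
after = resource.getrusage(resource.RUSAGE_CHILDREN)

user_s = after.ru_utime - before.ru_utime   # the "u" value
sys_s = after.ru_stime - before.ru_stime    # the "s" value
print(f"{user_s:.3f}u, {sys_s:.3f}s")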

6 LQCD: Dirac Operator Benchmark (SSE)
Lattice 16 x 16^3, single P4/XEON CPU.
[Chart: MFLOPS for the Dirac operator vs. linear algebra kernels. Dirac operator: intensive cache pre-fetch. Linear algebra: no benefit from cache.]
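MFLOPS figures for such a benchmark are usually derived from a fixed flop count per lattice site. The sketch below assumes the commonly quoted 1320 flops per site for one Wilson-Dirac operator application and a made-up timing; both are assumptions for illustration, not numbers from the slide.

# Sketch: converting a timed Dirac-operator application into MFLOPS.
# Assumptions (not from the slide): 1320 flops per site for one Wilson-Dirac
# application, and a placeholder elapsed time per application.

T, L = 16, 16               # 16 x 16^3 lattice, as in the benchmark
sites = T * L**3
flops_per_site = 1320       # commonly used count for the Wilson-Dirac operator
seconds_per_call = 0.06     # placeholder timing, for illustration only

mflops = sites * flops_per_site / (seconds_per_call * 1e6)
print(f"{sites} sites, {mflops:.0f} MFLOPS")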

7 Future Cluster/Farm Architectures: Blade Servers?
NEXCOM, low-voltage blade server: 200 low-voltage Intel XEON CPUs (1.6 GHz, 30 W) in a 42U rack; integrated Gbit Ethernet network.
Mellanox, InfiniBand blade server: single-XEON blades connected via a 10 Gbit (4X) InfiniBand network (MEGWARE, NCSA, Ohio State University).
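A quick power-density estimate puts the "low voltage" figure in context; the CPU count and per-CPU power come from the slide, only the per-rack arithmetic is added.

# Sketch: CPU power density of the NEXCOM blade rack described above.
# CPU count and per-CPU power are from the slide; the arithmetic is added.

cpus_per_rack = 200
watts_per_cpu = 30      # 1.6 GHz low-voltage XEON
rack_units = 42

cpu_power_kw = cpus_per_rack * watts_per_cpu / 1000
watts_per_u = cpus_per_rack * watts_per_cpu / rack_units
print(f"CPU power per rack: {cpu_power_kw:.1f} kW ({watts_per_u:.0f} W per U, CPUs only)")

Note that this counts the CPUs alone; memory, disks, and network add to the real per-rack load.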

8 Intel Centrino Mobile Technology
[Block diagram: Intel Pentium M processor (mobile-optimized, target average power < 1 W) on a 400 MHz low-power processor system bus; Intel 855GM chipset with integrated graphics, DDR memory, DVO (2 ports) and LVDS outputs, and a hub interface to the ICH4-M (USB 2.0, 6 USB 1.1/2.0 ports, 2 ATA66/100 IDE channels, AC'97 2.3 modem, integrated LAN, PCI 33 MHz, CardBus, APIC enabled); IMVP IV voltage regulation (VID, PSI) and optimized power supply; Intel Pro/Wireless 802.11b network connection.]

9 Benchmark: hyper-threading vs. non-hyper-threading kernel
Architecture Results: Amanda Simulation, Amanda Reconstruction, Theory (form3).

Host: cube3, farm node, 400 MHz FSB XEON-P4, 2.4 GHz, 512 KB cache, 2 GB memory, non-hyper-threading kernel
  Amanda Simulation: 1626.200u, 1.480s    Amanda Reconstruction: 131.250u, 0.320s    form3: Time = 61.33 sec, Generated terms = 35999900

Host: cube8, farm node, 400 MHz FSB XEON-P4, 2.4 GHz, 512 KB cache, 2 GB memory, hyper-threading kernel
  Amanda Simulation: 1617.420u, 1.530s    Amanda Reconstruction: 154.210u, 0.430s    form3: Time = 62.52 sec, Generated terms = 35999900
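The comparison is easier to read as relative differences; the short sketch below just recomputes percentage changes from the times quoted in the table (user time for the Amanda jobs, the reported form3 time for the theory job).

# Sketch: relative change of run time with the hyper-threading kernel,
# using the single-job numbers from the table above (cube8 vs. cube3).

non_ht = {"simulation": 1626.200, "reconstruction": 131.250, "form3": 61.33}
ht     = {"simulation": 1617.420, "reconstruction": 154.210, "form3": 62.52}

for job in non_ht:
    change = (ht[job] - non_ht[job]) / non_ht[job] * 100
    print(f"{job}: {change:+.1f}% with the hyper-threading kernel")

With a single job per node, the hyper-threading kernel is essentially neutral for the simulation and noticeably slower for the reconstruction.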

