1 High Performance Computing at Mercury Marine
Arden Anderson, Mercury Marine Product Development and Engineering

2 Outline: About Mercury Marine; engineering simulation capabilities; progression of computing systems; HPC system cost and justification; summary.

3 Mercury Marine began as the Kiekhaefer Corp., founded by E. Carl Kiekhaefer in Cedarburg, WI in 1939. Mercury was acquired by Brunswick Corporation in 1961, a leader in active recreation: marine engines, boating, bowling, billiards, and fitness. Today Mercury is the USA's only outboard manufacturer and employs 4,200 people worldwide. The Fond du Lac, WI campus includes corporate offices; the technology center and R&D offices; and outboard manufacturing (casting, machining, and assembly through distribution). (Image caption: Mercury's 1st patent.)

4 The Most Comprehensive Product Offering in Recreational Marine: Outboard engines (2.5 hp to 350 hp), all new or updated in the last 5 years; sterndrive engines (135 hp to 1250 hp), all updated to the new emissions standard in the last year; props / rigging / P&A; Land N Sea / Attwood. Diversified, quality products, connected to the parent corporation.

5 Outline: About Mercury Marine; engineering simulation capabilities; progression of computing systems; HPC system cost and justification; summary.

6 Poll Question 3: How many compute cores do you use for your largest jobs? a. Less than 4; b. 4-16; c. 16-64; d. More than 64.

7 Standard FEA: Fatigue & hardware correlation; non-linear gaskets; sub-modeling; system assemblies with contact.

8 Explicit FEA: System-level submerged object impact. The method development was presented at the 2008 Abaqus Users' Conference.

9 CFD: Transient internal flow, with flow distribution correlated to hardware; external flow, including a moving-mesh propeller and cavitation onset; two-phase flow for vessel drag, heave, and pitch.

10 Heat Transfer: Enclosure air flow & component temperatures; conjugate heat transfer for temperature distribution & thermal fatigue.

11 Overview of Mercury Marine Design Analysis Group
Simulation methods: Structural analysis (implicit and explicit finite element); dynamic analysis; fluid dynamics; heat transfer; engine performance.
Experience: Aerospace; automotive and off-highway; composites; dynamic impact and weapons; gas and diesel engines; hybrid; marine.
Analyst workstations: Pre- and post-processing; dual Xeon 5160 (4 cores), 3.0 GHz; up to 16 GB RAM; 64-bit Windows XP.
HPC system: FEA and CFD solvers; 80 cores (10 nodes x 8 cores/node); up to 40 GB RAM per node; InfiniBand switch; Windows HPC Server 2008.

12 Poll Question 3 (revisited): How many compute cores do you use for your largest jobs? a. Less than 4; b. 4-16; c. 16-64; d. More than 64. (This slide is a placeholder for coming back to the poll question responses.)

13 Outline: About Mercury Marine; engineering simulation capabilities; progression of computing systems; HPC system cost and justification; summary.

14 Evolution of Computing Systems at Mercury Marine
–Pre- and post-processing on Windows PCs (2 GB RAM); computing on HP Unix workstations (single CPU, 4-8 GB RAM); ~$200k for 10 boxes. Memory limitations on pre/post and limited model size; minimal parallel processing (CFD only).
–Updated processing capabilities with a Linux compute server (4-CPU Itanium with 32 GB RAM for FEA; 6-CPU Opteron for CFD); $125k server. Increased model size with larger memory; parallel processing for FEA & CFD; roughly the same number of processors as the previous system, with large increases in speed and capability.
–Updated pre/post (the 2004 PCs) with 2 x 2-core Linux workstations (3.0 GHz, 4-16 GB RAM). More desktop memory for pre-processing; increased computing by clustering the new pre/post machines; small and mid-sized standard FEA on the pre/post machines using multiple CPUs.
–Introduce Windows HPC Server in 2009…

15 HPC Decision
Influencing factors: Emphasis on minimizing analysis time over maximizing computer & software utilization; limited availability of the server room; Linux support; cost conscious.
Goals: Easy to implement; the machine would run only Abaqus; reduce large run times by 2.5x or more; ability to handle larger future runs; a system that can be supported by in-house IT; re-evaluate the software versus hardware balance.

16 Why Windows HPC Server? Limited access to the Unix/Linux support group: that group has database expertise but little experience in high performance computing, and HPC projects rank below company database projects in priority. The Windows platform support group is larger. Benchmarking showed competitive run times. Intuitive use and easy management: Job Scheduler, Add Node Wizard, Heat Map.
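For a concrete sense of the Job Scheduler workflow, the sketch below shows one way an Abaqus run could be handed to a Windows HPC Server 2008 cluster from Python. This is an illustrative sketch, not Mercury's actual scripts: the job name, input file, and share path are hypothetical, and the HPC Pack "job submit" flags shown are the standard ones from that release.

    import subprocess

    def submit_abaqus_job(job_name, input_file, cpus, work_dir):
        # Assemble the Abaqus solver command (job/input/cpus are standard
        # Abaqus command-line options; mp_mode=mpi enables MPI parallelism).
        abaqus_cmd = (f"abaqus job={job_name} input={input_file} "
                      f"cpus={cpus} mp_mode=mpi interactive")
        # Hand the command to the Windows HPC Server 2008 scheduler.
        subprocess.run([
            "job", "submit",
            f"/numprocessors:{cpus}",   # cores requested from the cluster
            f"/workdir:{work_dir}",     # shared directory visible to all nodes
            f"/stdout:{job_name}.log",  # capture solver console output
            abaqus_cmd,
        ], check=True)

    # Example: a 32-core implicit run (hypothetical names)
    submit_abaqus_job("head_boltup", "head_boltup.inp", 32, r"\\headnode\runs")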

17 Mercury HPC System Detail, 2009
Windows Server 2008 HPC Edition; 32-core system (4 compute nodes with 8 cores per node) plus head node; 40 GB/node, 160 GB total.
Head node (x3650): 2 x E5440 quad-core Xeon, 2.8 GHz, 12 MB L2, 1333 MHz bus; 16 GB 667 MHz memory; 6 x 1.0 TB SATA in RAID 10.
4 compute nodes (x3450): 2 x E5472 quad-core Xeon, 3.0 GHz, 12 MB L2, 1600 MHz bus; 40 GB 800 MHz memory; 2 x 750 GB SATA in RAID 0.
GigE switch.

18 Outline: About Mercury Marine; engineering simulation capabilities; progression of computing systems; HPC system cost and justification; summary.

19 Justification: A request from management to reduce run turnaround time (some run times now reach weeks as runs have become more detailed and complex); quicker feedback to avoid late tooling changes; the need to minimize manpower downtime; and large software costs, which make it necessary to maximize the software investment.

20 Budget Breakdown: Computers are a small portion of the budget, which is skewed towards software over hardware; rebalancing hardware/software in 2009 slightly shifted this breakdown.

21 Abaqus Token Balancing
The previous Abaqus token count was high to enable multiple simultaneous jobs on smaller machines; tokens were re-balanced from several small jobs to fewer large jobs.
Original (45 tokens): 1 x 8-CPU job plus 4 x 4-CPU jobs.
New (40 tokens): 2 x 16-CPU plus 1 x 4-CPU, or 1 x 32-CPU plus 2 x 4-CPU, or 3 x 8-CPU.
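As a sanity check on the token arithmetic above, the commonly cited Abaqus token rule of thumb (about floor(5 x N^0.422) tokens for an N-core job; an assumption here, since the authoritative numbers come from SIMULIA's license schedule) reproduces the slide's job mixes:

    import math

    def abaqus_tokens(cores):
        # Widely quoted approximation of Abaqus token cost for an N-core job
        # (assumed formula; the license schedule is authoritative).
        return math.floor(5 * cores ** 0.422)

    # Original pool (45 tokens): 1 x 8-CPU + 4 x 4-CPU running at once
    original = abaqus_tokens(8) + 4 * abaqus_tokens(4)   # 12 + 4*8 = 44

    # New pool (40 tokens) covers any one of the re-balanced mixes:
    mix_a = 2 * abaqus_tokens(16) + abaqus_tokens(4)     # 32 + 8 = 40
    mix_b = abaqus_tokens(32) + 2 * abaqus_tokens(4)     # 21 + 16 = 37
    mix_c = 3 * abaqus_tokens(8)                         # 36
    print(original, mix_a, mix_b, mix_c)                 # 44 40 37 36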

22 HPC System Costs (2009): System buy price with OS: $37,000; 2-year lease price: $16,000 per year. With software re-scaled to match the new system, the incremental cost is $7,300 per year.

23 Historic Productivity Increases: Continual improvement in productivity alongside large increases in analysis complexity.

24 Abaqus S4b Implicit Benchmark: Cylinder head bolt-up, 5,000,000 DOF, 32 GB memory. Run time in hours at 4/8/16/32 CPUs for the Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB) versus the Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node). (The individual run-time values were not captured in this transcript.)

25 Mercury Real World Standard FEA: Block + head + bedplate, 8,800,000 DOF, 55 GB memory; preload + thermal + reciprocating forces. Run time in hours (days):
Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB), 4 CPU: 213 (9).
Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node), 8 CPU: 64 (3); 16 CPU: 37 (1.5); 32 CPU: 31 (1.3).
(Picture: Abaqus benchmark S4b.)
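Using the recovered timings above, and assuming the 8/16/32-CPU column mapping for the HPC rows, a quick calculation shows the speedup over the 4-CPU Itanium baseline:

    # Run times in hours from the slide above (CPU-count mapping assumed)
    itanium_4cpu_hours = 213.0
    hpc_hours = {8: 64.0, 16: 37.0, 32: 31.0}

    for cpus, hours in hpc_hours.items():
        speedup = itanium_4cpu_hours / hours
        print(f"{cpus:>2} CPUs: {hours:5.1f} h, {speedup:.1f}x vs Itanium")
    #  8 CPUs:  64.0 h, 3.3x vs Itanium
    # 16 CPUs:  37.0 h, 5.8x vs Itanium
    # 32 CPUs:  31.0 h, 6.9x vs Itanium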

26 Mercury Real World Explicit FEA: Outboard impact, 600,000 elements, dt = 3.5e-8 s for 0.03 s (857k increments). Run time in hours:
Mercury Linux Cluster (4 nodes at 2 cores/node), 8 CPU: 58.
Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): run-time values not captured in this transcript.
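The increment count quoted on this slide follows directly from the stable time increment: 0.03 s of event time divided by dt = 3.5e-8 s gives roughly 857,000 explicit steps, which is why runs like this are so compute-intensive. A one-line check:

    # Explicit FEA: increments = simulated event time / stable time increment
    total_time = 0.03    # seconds of physical event
    dt = 3.5e-8          # stable time increment, seconds
    print(f"{total_time / dt:,.0f} increments")   # 857,143 -- the slide's 857k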

27 Outline: About Mercury Marine; engineering simulation capabilities; progression of computing systems; HPC system cost and justification; summary.

28 Summary: Mercury's HPC capability has evolved over the last 5 years. Each incremental step has led to greater throughput and increased capabilities that allow us to better meet the demands of a fast-paced product development cycle. Our latest HPC server has delivered improvements in run times as high as 8x at a very affordable price. We expect further gains in meshing productivity as we re-size runs to the new computing system.

29 Progress Continues: Mercury HPC System Detail, 2010 Updates
Windows Server 2008 HPC Edition; 48 cores added to the existing system for a combined total of 80 cores (6 new compute nodes with 8 cores per node, 24 GB/node). Now running both FEA and CFD on the HPC system (~70/30 split).
Head node (x3650): 2 x E5440 quad-core Xeon, 2.8 GHz, 12 MB L2, 1333 MHz bus; 16 GB 667 MHz memory; 6 x 1.0 TB SATA in RAID 10.
4 compute nodes (x3450): 2 x E5472 quad-core Xeon, 3.0 GHz, 12 MB L2, 1600 MHz bus; 40 GB 800 MHz memory per node; 2 x 750 GB SATA in RAID 0.
6 compute nodes (x3550): 2 x E5570 quad-core Xeon, 3.0 GHz; 24 GB RAM per node; 2 x 500 GB SATA in RAID 0.
InfiniBand switch.
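The combined totals on this slide can be tallied from the per-node figures (all numbers taken from the slide itself):

    # Cluster tally after the 2010 update
    nodes = [
        {"count": 4, "cores": 8, "ram_gb": 40},   # original x3450 compute nodes
        {"count": 6, "cores": 8, "ram_gb": 24},   # added x3550 compute nodes
    ]
    cores = sum(n["count"] * n["cores"] for n in nodes)
    ram   = sum(n["count"] * n["ram_gb"] for n in nodes)
    print(cores, "cores,", ram, "GB RAM")   # 80 cores, 304 GB RAM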

30 Thank You. Questions?

31 Contact Info and Links
Arden Anderson
Microsoft HPC Server Case Study: http://www.microsoft.com/casestudies/Windows-HPC-Server-2008/Mercury-Marine/Manufacturer-Adopts-Windows-Server-Based-Cluster-for-Cost-Savings-Improved-Designs/
Crash Prediction for Marine Engine Systems, 2008 Abaqus Users' Conference: available by searching the conference archives for Mercury Marine.

