Presentation on theme: "High Performance Computing at Mercury Marine"— Presentation transcript:

1 High Performance Computing at Mercury Marine
Arden Anderson, Mercury Marine Product Development and Engineering

2 Outline
About Mercury Marine
Engineering simulation capabilities
Progression of computing systems
HPC system cost and justification
Summary

3 Mercury Marine Founded in Cedarburg, WI in 1939
Mercury Marine began as the Kiekhaefer Corp. in 1939, founded by E. Carl Kiekhaefer
Mercury acquired by Brunswick Corporation in 1961
Leader in active recreation: marine engines, boating, bowling, billiards, and fitness
Today, the USA's only outboard manufacturer, employing 4,200 people worldwide
Fond du Lac, WI campus includes:
  Corporate offices
  Technology Center, R&D offices
  Outboard manufacturing (casting, machining, assembly to distribution)
(Slide image: Mercury's 1st patent)

4 The Most Comprehensive Product Offering In Recreational Marine
Outboard Engines (2.5 hp to 350 hp)
Sterndrive Engines (135 hp to 1250 hp)
All new or updated in the last 5 years
All updated to the new emissions standard in the last year
Props / Rigging / P&A
Land 'N' Sea / Attwood
Diversified, quality products, connected to parent corporation

5 Outline
About Mercury Marine
Engineering simulation capabilities
Progression of computing systems
HPC system cost and justification
Summary

6 How many compute cores do you use for your largest jobs?
Poll Question: How many compute cores do you use for your largest jobs?
Less than 4
4-16
17-64
More than 64

7 Fatigue & Hardware Correlation
Standard FEA
Fatigue & Hardware Correlation
Non-Linear Gaskets
System Assemblies with Contact
Sub-Modeling

8 Explicit FEA: System-level submerged object impact
Method development was presented at the 2008 Abaqus Users Conference

9 CFD
Transient Internal Flow
External Flow
Two-Phase Flow
Cavitation onset
Vessel drag, heave, and pitch
Flow distribution correlated to hardware
Moving-mesh propeller

10 Heat Transfer: Enclosure Air Flow & Component Temperatures
Conjugate Heat Transfer for Temperature Distribution & Thermal Fatigue

11 Overview of Mercury Marine Design Analysis Group
Experience: Aerospace; Automotive and Off-Highway; Composites; Dynamic Impact and Weapons; Gas and Diesel Engine; Hybrid; Marine
Simulation Methods: Structural Analysis; Implicit Finite Element; Explicit Finite Element; Dynamic Analysis; Fluid Dynamics; Heat Transfer; Engine Performance
Analyst Workstations (pre- and post-processing): Dual Xeon 5160 (4 cores), 3.0 GHz; up to 16 GB RAM; 64-bit Windows XP
HPC System (FEA and CFD solvers): 80 cores (10 nodes x 8 cores/node); up to 40 GB RAM per node; InfiniBand switch; Windows HPC Server 2008

12 How many compute cores do you use for your largest jobs?
Poll Question: How many compute cores do you use for your largest jobs?
Less than 4
4-16
17-64
More than 64
(This slide is a placeholder for coming back to the poll question responses.)

13 Outline
About Mercury Marine
Engineering simulation capabilities
Progression of computing systems
HPC system cost and justification
Summary

14 Evolution of Computing Systems at Mercury Marine
2004:
  Pre- and post-processing on Windows PCs, 2 GB RAM
  Computing on HP Unix workstations – single CPU, 4-8 GB RAM, ~$200k for 10 boxes
  Memory limitations on pre/post and limited model size
  Minimal parallel processing (CFD only)
2005:
  Updated processing capabilities with a Linux compute server – 4-CPU Itanium, 32 GB RAM for FEA; 6-CPU Opteron for CFD; $125k server
  Increased model size with larger memory
  Parallel processing for FEA & CFD
  ~Same number of processors as the previous system with large increases in speed and capability
2007:
  Updated pre/post (2004 PCs) with 2 x 2-core Linux workstations, 3.0 GHz, 4-16 GB RAM
  More desktop memory for pre-processing
  Increased computing by clustering the new pre/post machines
  Small & mid-sized standard FEA on pre/post machines using multiple CPUs
Introduce Windows HPC Server in 2009…

15 2009 HPC Decision
Influencing factors:
  Emphasis on minimizing analysis time over maximizing computer & software utilization
  Limited availability of server room and Linux support
  Cost conscious
  Easy to implement
  Machine would run only Abaqus
Goals:
  Reduce large run times by 2.5x or more
  Ability to handle larger future runs
  System needs to be supported by in-house IT
  Re-evaluate the software versus hardware balance

16 Limited access to Unix/Linux support group
Why Windows HPC Server?
Limited access to Unix/Linux support group
  Unix/Linux support group has database expertise – little experience in high performance computing
  HPC projects lower priority than company database projects
Larger Windows platform support group
Benchmarking showed competitive run times
Intuitive use and easy management: Job Scheduler, Add Node Wizard, Heat Map

17 Mercury HPC System Detail, 2009
Windows Server 2008 HPC Edition
32-core cluster + head node: 4 compute nodes with 8 cores per node, 40 GB/node (160 GB total), GigE switch
Head node (X3650): 2 x E5440 quad-core Xeon, 2.8 GHz, 12 MB L2, 1333 MHz bus; 16 GB 667 MHz memory; 6 x 1.0 TB SATA in RAID 10
Compute nodes (4 x X3450): 2 x E5472 quad-core Xeon, 3.0 GHz, 12 MB L2, 1600 MHz bus; 40 GB 800 MHz memory; 2 x 750 GB SATA in RAID 0

18 Outline
About Mercury Marine
Engineering simulation capabilities
Progression of computing systems
HPC system cost and justification
Summary

19 Justification
Request from management to reduce run turnaround time – some run times are now weeks as runs have become more detailed and complex
Quicker feedback to avoid late tooling changes
Need to minimize manpower downtime
Large software costs – need to maximize the software investment

20 Budget Breakdown (2009 vs. 2010)
Computers are a small portion of the budget
Budget is skewed towards software over hardware
Rebalancing hardware/software in 2009 slightly shifted this breakdown

21 Abaqus Token Balancing
Previous Abaqus token count was high to enable multiple simultaneous jobs on smaller machines
Re-balance tokens from several small jobs to fewer large jobs
Original (45 tokens): 1 x 8-CPU job + 4 x 4-CPU jobs
New (40 tokens): 2 x 16-CPU + 1 x 4-CPU, or 1 x 32-CPU + 2 x 4-CPU, or 3 x 8-CPU
Tokens per job by CPU count: 4 CPUs – 8 tokens; 8 CPUs – 12 tokens; 16 CPUs – 16 tokens; 32 CPUs – 21 tokens
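The per-job token counts above match the widely used Abaqus analysis-token scaling. Below is a minimal sketch of the re-balancing arithmetic, assuming the commonly cited approximation tokens = floor(5 x N^0.422); the exact schedule is set by the SIMULIA license terms, so treat the formula as illustrative.

```python
import math

def abaqus_tokens(cores):
    """Approximate analysis tokens needed for one Abaqus job on `cores` cores.

    Assumes the commonly cited scaling floor(5 * cores**0.422); the real
    schedule comes from the SIMULIA license agreement.
    """
    return math.floor(5 * cores ** 0.422)

def pool_supports(total_tokens, job_core_counts):
    """Check whether a token pool can run the given simultaneous jobs."""
    return sum(abaqus_tokens(c) for c in job_core_counts) <= total_tokens

if __name__ == "__main__":
    for cores in (4, 8, 16, 32):
        print(cores, "cores ->", abaqus_tokens(cores), "tokens")
    # Re-balanced 40-token pool from the slide: 2 x 16-CPU + 1 x 4-CPU jobs
    print(pool_supports(40, [16, 16, 4]))       # True
    # Original mix: 1 x 8-CPU + 4 x 4-CPU jobs against 45 tokens
    print(pool_supports(45, [8, 4, 4, 4, 4]))   # True
```

Under this scaling, doubling a job's core count costs only about a third more tokens, which is why consolidating the pool into fewer, larger jobs stretches the same license budget further.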

22 HPC System Costs (2009)
System buy price with OS: $37,000
2-year lease price: $16,000 per year
Software re-scaled to match the new system; incremental cost: $7,300 per year
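As a quick check of these figures, a short sketch comparing the buy price with the two-year lease and backing out the implied software savings; note the $8,700/year offset is inferred from the slide's numbers rather than stated there.

```python
# Figures reported on the slide (2009, USD)
buy_price = 37_000            # one-time purchase with OS
lease_per_year = 16_000       # two-year lease
lease_years = 2
incremental_per_year = 7_300  # net yearly cost after software re-scaling

total_lease = lease_per_year * lease_years                         # 32,000 over two years
# Inferred, not stated on the slide: yearly savings from re-scaling software
implied_software_savings = lease_per_year - incremental_per_year   # 8,700 per year

print(f"Lease total over {lease_years} years: ${total_lease:,}")
print(f"Buy price with OS: ${buy_price:,}")
print(f"Implied software savings: ${implied_software_savings:,} per year")
```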

23 Historic Productivity Increases
Continual improvement in productivity
Large increases in analysis complexity

24 Abaqus S4b Implicit Benchmark
Cylinder head bolt-up
5,000,000 DOF
32 GB memory
Run time in hours:
  Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB) – 4 CPU: 1.5
  Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node) – 8 CPU: 0.50; 16 CPU: 0.61; 32 CPU: 0.38

25 Mercury “Real World” Standard FEA
Block + head + bedplate
8,800,000 DOF
55 GB memory
Preload + thermal + reciprocating forces
(Slide image: Abaqus benchmark S4b)
Run time in hours (days):
  Mercury Itanium Server (Itanium 1.5 GHz, Gig-E, 32 GB) – 4 CPU: 213 (9)
  Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node) – 8 CPU: 64 (3); 16 CPU: 37 (1.5); 32 CPU: 31 (1.3)

26 Mercury “Real World” Explicit FEA
Outboard impact
600,000 elements
dt = 3.5e-8 s for 0.03 s (857k increments)
Run time in hours:
  Mercury Linux Cluster (4 nodes at 2 cores/node) – 8 CPU: 58
  Mercury HPC System (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node) – 8 CPU: 29.5; 16 CPU: 16; 32 CPU: 11
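The gains over the earlier systems follow directly from the run times reported on the three benchmark slides; a minimal sketch of that arithmetic (values in hours, copied from the tables above):

```python
# Old-system vs. new HPC-system run times (hours) from the benchmark slides
benchmarks = {
    "S4b implicit (5M DOF)":            {"old": 1.5, "new": {8: 0.50, 16: 0.61, 32: 0.38}},
    "Real-world implicit (8.8M DOF)":   {"old": 213, "new": {8: 64,   16: 37,   32: 31}},
    "Real-world explicit (600k elems)": {"old": 58,  "new": {8: 29.5, 16: 16,   32: 11}},
}

for name, data in benchmarks.items():
    best_cores, best_time = min(data["new"].items(), key=lambda kv: kv[1])
    speedup = data["old"] / best_time
    print(f"{name}: {data['old']} h -> {best_time} h at {best_cores} cores "
          f"({speedup:.1f}x faster)")
```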

27 Outline
About Mercury Marine
Engineering simulation capabilities
Progression of computing systems
HPC system cost and justification
Summary

28 Summary
Mercury HPC has evolved over the last 5 years.
Each incremental step has led to greater throughput and increased capabilities that have allowed us to better meet the demands of a fast-paced product development cycle.
Our latest HPC server has delivered improvements in run times as high as 8x at a very affordable price.
We expect further gains in meshing productivity as we re-size runs to the new computing system.

29 Progress Continues: Mercury HPC System Detail, 2010 Updates
Windows Server 2008 HPC Edition
Add 48 cores to the existing cluster (combined total of 80 cores): 6 new compute nodes with 8 cores per node, 24 GB/node
Now running FEA and CFD on the HPC system (~70/30 split)
InfiniBand switch
Head node (X3650): 2 x E5440 quad-core Xeon, 2.8 GHz, 12 MB L2, 1333 MHz bus; 16 GB 667 MHz memory; 6 x 1.0 TB SATA in RAID 10
Compute nodes (4 x X3450): 2 x E5472 quad-core Xeon, 3.0 GHz, 12 MB L2, 1600 MHz bus; 40 GB 800 MHz memory per node; 2 x 750 GB SATA in RAID 0
Compute nodes (6 x X3550): 2 x E5570 quad-core Xeon, 3.0 GHz; 24 GB RAM per node; 2 x 500 GB SATA in RAID 0
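A small sketch that totals the 2010 compute-node inventory above into cores and memory; the head node is excluded, which matches the 80-core figure on the slide, and per-core memory is derived here only for reference.

```python
# Compute-node inventory from the 2010 system detail slide
nodes = [
    {"model": "X3450", "count": 4, "cores_per_node": 8, "ram_gb_per_node": 40},
    {"model": "X3550", "count": 6, "cores_per_node": 8, "ram_gb_per_node": 24},
]

total_cores = sum(n["count"] * n["cores_per_node"] for n in nodes)
total_ram = sum(n["count"] * n["ram_gb_per_node"] for n in nodes)

print(f"Total compute cores: {total_cores}")         # 80
print(f"Total compute-node memory: {total_ram} GB")  # 304 GB
for n in nodes:
    per_core = n["ram_gb_per_node"] / n["cores_per_node"]
    print(f'{n["count"]} x {n["model"]}: {n["cores_per_node"]} cores/node, '
          f'{n["ram_gb_per_node"]} GB/node ({per_core:.1f} GB/core)')
```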

30 Thank You. Questions?

31 Microsoft HPC Server Case Study
Contact Info and Links
Arden Anderson
Microsoft HPC Server Case Study
Crash Prediction for Marine Engine Systems, Abaqus Users Conference – available by searching the conference archives for Mercury Marine:

