High Performance Computing at Mercury Marine

Arden Anderson, Mercury Marine Product Development and Engineering

Outline
- About Mercury Marine
- Engineering simulation capabilities
- Progression of computing systems
- HPC system cost and justification
- Summary

Mercury Marine
- Began as the Kiekhaefer Corp., founded by E. Carl Kiekhaefer in Cedarburg, WI in 1939
- Acquired by Brunswick Corporation in 1961, a leader in active recreation: marine engines, boating, bowling, billiards, and fitness
- Today the USA's only outboard manufacturer, employing 4,200 people worldwide
- Fond du Lac, WI campus includes corporate offices, the technology center and R&D offices, and outboard manufacturing (casting, machining, and assembly through distribution)
(Image: Mercury's 1st patent)

The Most Comprehensive Product Offering in Recreational Marine
- Outboard engines (2.5 hp to 350 hp)
- Sterndrive engines (135 hp to 1,250 hp)
- All new or updated in the last 5 years; all updated to the new emissions standard in the last year
- Props / rigging / P&A
- Land ‘N’ Sea / Attwood
- Diversified, quality products, connected to the parent corporation

Outline: Engineering simulation capabilities

Poll Question: How many compute cores do you use for your largest jobs?
- Less than 4
- 4-16
- 17-64
- More than 64

Standard FEA
- Fatigue and hardware correlation
- Non-linear gaskets
- System assemblies with contact
- Sub-modeling

Explicit FEA
- System-level submerged object impact
- Method development was presented at the 2008 Abaqus Users Conference

CFD
- Transient internal flow, with flow distribution correlated to hardware
- External flow: vessel drag, heave, and pitch
- Two-phase flow: cavitation onset
- Moving-mesh propeller

Heat Transfer
- Enclosure air flow and component temperatures
- Conjugate heat transfer for temperature distribution and thermal fatigue

Overview of Mercury Marine Design Analysis Group

Experience:
- Aerospace
- Automotive and off-highway
- Composites
- Dynamic impact and weapons
- Gas and diesel engine
- Hybrid
- Marine

Simulation methods:
- Structural analysis: implicit and explicit finite element, dynamic analysis
- Fluid dynamics
- Heat transfer
- Engine performance

Analyst workstations (pre- and post-processing):
- Dual Xeon 5160 (4 cores), 3.0 GHz
- Up to 16 GB RAM
- 64-bit Windows XP

HPC system (FEA and CFD solvers):
- 80 cores (10 nodes x 8 cores/node)
- Up to 40 GB RAM per node
- InfiniBand switch
- Windows HPC Server 2008

Poll Question (revisited): How many compute cores do you use for your largest jobs?
- Less than 4
- 4-16
- 17-64
- More than 64
(This slide is a placeholder for returning to the poll question responses.)

Outline: Progression of computing systems

Evolution of Computing Systems at Mercury Marine

2004:
- Pre- and post-processing on Windows PCs with 2 GB RAM; computing on HP Unix workstations (single CPU, 4-8 GB RAM, ~$200k for 10 boxes)
- Memory limitations on pre/post and limited model size; minimal parallel processing (CFD only)

2005:
- Updated processing capabilities with a Linux compute server ($125k): 4-CPU Itanium with 32 GB RAM for FEA, 6-CPU Opteron for CFD
- Increased model size with larger memory; parallel processing for FEA and CFD
- ~Same number of processors as the previous system, with large increases in speed and capability

2007:
- Updated pre/post machines (the 2004 PCs) with 2 x 2-core Linux workstations, 3.0 GHz, 4-16 GB RAM, giving more desktop memory for pre-processing
- Increased computing by clustering the new pre/post machines: small and mid-sized standard FEA runs on pre/post machines using multiple CPUs

Introduced Windows HPC Server in 2009…

2009 HPC Decision

Influencing factors:
- Emphasis on minimizing analysis time over maximizing computer and software utilization
- Limited server-room availability and limited Linux support
- Cost conscious and easy to implement
- Machine would run only Abaqus

Goals:
- Reduce large run times by 2.5x or more
- Ability to handle larger future runs
- System supportable by in-house IT
- Re-evaluate the software-versus-hardware balance

Why Windows HPC Server?
- Limited access to the Unix/Linux support group: the group has database expertise but little experience in high-performance computing, and HPC projects are lower priority than company database projects
- Larger Windows platform support group
- Benchmarking showed competitive run times
- Intuitive use and easy management: Job Scheduler, Add Node Wizard, Heat Map

Mercury HPC System Detail, 2009
- Windows Server 2008 HPC Edition
- 32-core server + head node: 4 compute nodes with 8 cores per node, 40 GB/node (160 GB total)
- GigE switch

Head node (X3650):
- Processors: 2 x E5440 Xeon quad-core, 2.8 GHz / 12 MB L2 / 1333 MHz bus
- Memory: 16 GB, 667 MHz
- Hard drives: 6 x 1.0 TB SATA in RAID 10

Compute nodes (4 x X3450):
- Processors: 2 x E5472 Xeon quad-core, 3.0 GHz / 12 MB L2 / 1600 MHz bus
- Memory: 40 GB, 800 MHz
- Drives: 2 x 750 GB SATA, RAID 0
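On a system like this, runs would go through the Windows HPC Server 2008 job scheduler mentioned earlier. As a minimal sketch (not Mercury's actual scripts): the wrapper below builds a standard Abaqus command line and hands it to the HPC Pack "job submit" CLI. The job name, input file, and core count are hypothetical placeholders, and the scheduler flags should be checked against the HPC Pack documentation for your version.

```python
import subprocess

def submit_abaqus_job(job_name, input_file, cores=32):
    """Submit an Abaqus analysis to the Windows HPC Server 2008 scheduler.

    Sketch only: "job submit" with /numcores and /jobname follows the
    HPC Pack CLI, and job=/input=/cpus=/mp_mode= are standard Abaqus
    command-line options; the names and paths here are hypothetical.
    """
    # Standard Abaqus solver invocation, run MPI-parallel across cores.
    abaqus_cmd = ["abaqus", f"job={job_name}", f"input={input_file}",
                  f"cpus={cores}", "mp_mode=mpi", "interactive"]
    # Wrap it in the scheduler's submit command so the cluster
    # allocates the requested cores before the solver starts.
    hpc_cmd = ["job", "submit", f"/numcores:{cores}",
               f"/jobname:{job_name}"] + abaqus_cmd
    subprocess.run(hpc_cmd, check=True)

if __name__ == "__main__":
    # Example: a 32-core run of a hypothetical head bolt-up model.
    submit_abaqus_job("head_boltup", "head_boltup.inp", cores=32)
```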

Outline: HPC system cost and justification

Justification
- Request from management to reduce run turnaround time: some run times reached 1-2 weeks as runs became more detailed and complex
- Quicker feedback to avoid late tooling changes
- Need to minimize manpower downtime
- Large software costs: need to maximize the software investment

Budget Breakdown (charted for 2009 and 2010)
- Computers are a small portion of the budget
- The budget is skewed towards software over hardware
- Rebalancing hardware/software in 2009 slightly shifted this breakdown

Abaqus Token Balancing
- The previous Abaqus token count was high to enable multiple simultaneous jobs on smaller machines
- Re-balance tokens from several small jobs to fewer large jobs (see the sketch after this slide)
- Original 45 tokens: 1 x 8-CPU job + 4 x 4-CPU jobs
- New 40 tokens, supporting mixes such as 2 x 16-CPU + 1 x 4-CPU, 1 x 32-CPU + 2 x 4-CPU, or 3 x 8-CPU
- Tokens per job (CPUs -> tokens): 4 -> 8, 8 -> 12, 16 -> 16, 32 -> 21
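The re-balancing arithmetic follows from how Abaqus token use grows sub-linearly with core count. A minimal sketch, assuming the commonly cited SIMULIA token curve tokens = floor(5 x cores^0.422), which reproduces the slide's table; verify against your own license terms before planning purchases:

```python
import math

def abaqus_tokens(cores):
    """Tokens consumed by one Abaqus job on `cores` cores.

    Assumes the commonly cited licensing curve
    tokens = floor(5 * cores**0.422); verify against
    your own SIMULIA license agreement.
    """
    return math.floor(5 * cores ** 0.422)

# Reproduce the slide's table: 4 -> 8, 8 -> 12, 16 -> 16, 32 -> 21.
for n in (4, 8, 16, 32):
    print(f"{n:2d} cores -> {abaqus_tokens(n)} tokens")

# Check the candidate job mixes against the new 40-token pool.
mixes = {
    "2 x 16-CPU + 1 x 4-CPU": [16, 16, 4],
    "1 x 32-CPU + 2 x 4-CPU": [32, 4, 4],
    "3 x 8-CPU": [8, 8, 8],
}
for name, jobs in mixes.items():
    total = sum(abaqus_tokens(c) for c in jobs)
    print(f"{name}: {total} tokens ({'fits' if total <= 40 else 'exceeds'} 40)")
```

Under that curve the three mixes cost 40, 37, and 36 tokens respectively, and doubling a job from 16 to 32 cores adds only 5 tokens, which is the economics behind shifting from several small jobs to fewer large ones.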

HPC System Costs (2009)
- System buy price with OS: $37,000
- 2-year lease price: $16,000 per year
- Software re-scaled to match the new system; incremental cost: $7,300 per year

Historic Productivity Increases
- Continual improvement in productivity
- Large increases in analysis complexity

Abaqus S4b Implicit Benchmark
- Cylinder head bolt-up: 5,000,000 DOF, 32 GB memory

Run time in hours:
- Mercury Itanium server (Itanium 1.5 GHz, Gig-E, 32 GB): 4 CPU: 1.5
- Mercury HPC system (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): 8 CPU: 0.50; 16 CPU: 0.61; 32 CPU: 0.38

Mercury “Real World” Standard FEA
- Block + head + bedplate: 8,800,000 DOF, 55 GB memory
- Preload + thermal + reciprocating forces
(Slide shows a picture of the Abaqus S4b benchmark model)

Run time in hours (days):
- Mercury Itanium server (Itanium 1.5 GHz, Gig-E, 32 GB): 4 CPU: 213 (9)
- Mercury HPC system (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): 8 CPU: 64 (3); 16 CPU: 37 (1.5); 32 CPU: 31 (1.3)
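A quick way to read the two implicit tables above is as speedup against the Itanium baseline. A small sketch, using only the numbers from the tables:

```python
def speedups(baseline_hours, runs):
    """Print speedup vs. the 4-CPU Itanium baseline for each core count."""
    for cores, hours in sorted(runs.items()):
        print(f"{cores:2d} cores: {hours:6.2f} h, "
              f"{baseline_hours / hours:.1f}x vs. baseline")

# S4b benchmark: Itanium 4-CPU baseline of 1.5 h.
speedups(1.5, {8: 0.50, 16: 0.61, 32: 0.38})
# "Real world" 8.8M-DOF model: Itanium 4-CPU baseline of 213 h.
speedups(213, {8: 64, 16: 37, 32: 31})
```

On the production model the 32-core run works out to about 6.9x (213 h down to 31 h); the "as high as 8x" figure in the summary presumably reflects the best case across runs.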

Mercury “Real World” Explicit FEA
- Outboard impact: 600,000 elements
- dt = 3.5e-8 s for 0.03 s of simulated time (0.03 / 3.5e-8 ≈ 857k increments)

Run time in hours:
- Mercury Linux cluster (4 nodes at 2 cores/node): 8 CPU: 58
- Mercury HPC system (E5472 Xeon 3.0 GHz, Gig-E, 32 GB/node): 8 CPU: 29.5; 16 CPU: 16; 32 CPU: 11

Outline: Summary

Summary
- Mercury HPC has evolved over the last 5 years
- Each incremental step has led to greater throughput and increased capabilities that allow us to better meet the demands of a fast-paced product development cycle
- Our latest HPC server has delivered improvements in run times as high as 8x at a very affordable price
- We expect further gains in meshing productivity as we re-size runs to the new computing system

Progress Continues: Mercury HPC System Detail, 2010 Updates
- Windows Server 2008 HPC Edition
- Added 48 cores to the existing server (combined total of 80 cores): 6 new compute nodes with 8 cores per node, 24 GB/node
- Now running FEA and CFD on the HPC system (~70/30 split)
- InfiniBand switch

Head node (X3650):
- Processors: 2 x E5440 Xeon quad-core, 2.8 GHz / 12 MB L2 / 1333 MHz bus
- Memory: 16 GB, 667 MHz
- Hard drives: 6 x 1.0 TB SATA in RAID 10

Compute nodes (4 x X3450):
- Processors: 2 x E5472 Xeon quad-core, 3.0 GHz / 12 MB L2 / 1600 MHz bus
- Memory: 40 GB, 800 MHz per node
- Drives: 2 x 750 GB SATA, RAID 0

Compute nodes (6 x X3550):
- Processors: 2 x E5570 Xeon quad-core, 3.0 GHz
- Memory: 24 GB RAM per node
- Drives: 2 x 500 GB SATA, RAID 0

Thank You. Questions?

Contact Info and Links
- Arden Anderson, arden.anderson@mercmarine.com
- Microsoft HPC Server case study: http://www.microsoft.com/casestudies/Windows-HPC-Server-2008/Mercury-Marine/Manufacturer-Adopts-Windows-Server-Based-Cluster-for-Cost-Savings-Improved-Designs/4000008161
- "Crash Prediction for Marine Engine Systems," 2008 Abaqus Users Conference; available by searching the conference archives for Mercury Marine: http://www.simulia.com/events/search-ucp.html