
1 Using HPC for Ansys CFX and Fluent
John Zaitseff, April 2015
High Performance Computing

2 The problem in computer labs
Multiple computers locked for long periods of time
Often just a handful of students
All computers running Ansys CFX or Fluent
Often randomly rebooted by other students and/or staff
Cannot get a computer when you need it
Can lose results when you do
Image credit: John Zaitseff, UNSW

3 The solution: High Performance Computing
“High performance computing is used to solve real-world problems of significant scale or detail across a diverse range of disciplines including physics, biology, chemistry, geosciences, climate sciences, engineering and many others.” — Intersect Australia
Image credit: IBM Blue Gene/P supercomputer, Argonne National Laboratory

4 High Performance Computing architecture
Massively Parallel Distributed Computational Clusters
Many individual servers (“nodes”): dozens to thousands
Multiple processors per node: between 8 and 64 cores
Interconnected by fast networks
Almost always run Linux
In our case: the Rocks Linux Distribution on top of CentOS 6.x
The Leonardi cluster. Image credit: John Zaitseff, UNSW

5 High Performance Computing architecture
Architecture diagram: the head node and storage node connect to the Internet and, via an internal network switch, to the compute nodes, which are grouped into chassis 1 to m of n compute nodes each (compute node 1-1 through compute node m-n).

6 Facilities for MECH students and staff
The Newton cluster
For undergraduate students, postgraduates and staff
MECH9620, MECH4100, MMAN4010, MMAN4020, AERO4110 and AERO4120 students already have an account!
The Trentino cluster
For postgraduate students and staff
By application
The Leonardi cluster
UNSW R1 Data Centre. Image credit: John Zaitseff, UNSW

7 The Newton cluster: newton.mech.unsw.edu.au
10 × Dell R415 server nodes
Head node: newton
Compute nodes: newton01 to newton09
160 × AMD Opteron processor cores
Two physical processors per node
Eight CPU cores per processor
Only four floating-point units per processor
320 GB of main memory (32 GB per node)
12 TB of storage: 6 × 3 TB drives in RAID 6
1 Gb Ethernet network interconnect
The Newton cluster. Image credit: John Zaitseff, UNSW

8 The Trentino cluster: trentino.mech.unsw.edu.au
16 × Dell R815 server nodes
Head node: trentino
Compute nodes: trentino01 to trentino15
1024 × AMD Opteron processor cores
Four physical processors per node
Sixteen CPU cores per processor
Only eight floating-point units per processor
2048 GB of main memory (128 GB per node)
30 TB of storage: 12 × 3 TB drives in RAID 6
4 × 1 Gb Ethernet network interconnect
The Trentino cluster. Image credit: John Zaitseff, UNSW

9 The Leonardi cluster: leonardi.eng.unsw.edu.au
7 × HP BladeSystem c7000 blade enclosures
1 × HP ProLiant DL385 G7 server: leonardi
56 × HP BL685c G7 compute nodes
Compute nodes: ec01b01 to ec07b08
2944 × AMD Opteron processor cores (two processor models)
Four physical processors per node
Twelve or sixteen CPU cores per processor
8448 GB of main memory (96–512 GB per node)
93.5 TB of storage: 70 × 2 TB drives in RAID 6+0
2 × 10 Gb Ethernet network interconnect
Nodes in the Leonardi cluster. Image credit: John Zaitseff, UNSW

10 The Raijin cluster: raijin.nci.org.au
3592 × Fujitsu blade server nodes
Multiple login nodes
Multiple management nodes
57,472 × Intel Xeon E5-2670 2.6 GHz processor cores
160 TB of main memory
10 PB of storage using the Lustre distributed file system
56 Gb InfiniBand FDR network interconnect
Image credit: National Computational Infrastructure

11 Connecting to an HPC system
Use the Secure Shell protocol (SSH)
Under Linux or Mac OS X: ssh username@hostname (for example, ssh zID@newton.mech.unsw.edu.au)
Under Windows: PuTTY (Start » All Programs » PuTTY » PuTTY)
Can install Cygwin: “that Linux feeling under Windows”
To connect to the Newton cluster:
Hostname: newton.mech.unsw.edu.au
Check the RSA2 fingerprint: 69:7e:64:75:57:67:ad:4c:21:8e:90:7d:8e:97:70:ce
User name: your zID
Password: your zPass
You will get a command-line prompt: something like “$”
To exit, type exit and press ENTER
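For example, a first login from a Linux or Mac OS X terminal might look like the following sketch (z1234567 is a placeholder zID, not a real account):
$ ssh z1234567@newton.mech.unsw.edu.au
(check that the reported fingerprint matches the one above, answer “yes”, then enter your zPass)
$ hostname        # confirm you are on the Newton head node
$ exit            # log out when finished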

12 Simple Linux commands
List files in a directory: ls [options] [pathname ...]
[ ] indicates optional parameters, ... indicates one or more parameters
Italic fixed-width font indicates replaceable parameters
Options include “-l” (letter L) for a long (detailed) listing
To show the current directory: pwd
To change directories: cd directory
~ is the home directory
. is the current directory
.. is the directory above the current one
~user is the home directory of user user
Subdirectories are separated by “/”, e.g., /home/z /src
To create directories: mkdir directory
To remove an empty directory: rmdir directory
To get help for a command: man command
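A short hypothetical session tying these together (the directory name is only illustrative):
$ pwd
$ ls -l
$ mkdir thesis
$ cd thesis
$ cd ..
$ rmdir thesis
$ man ls          # press q to leave the manual page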

13 More simple Linux commands
To output one or more files’ contents: cat filename ...
To view one or more files page by page: less filename ...
To copy one file: cp source destination
To copy one or more files to a directory: cp filename ... dir
To preserve the “last modified” time-stamp: cp -p
To copy recursively: cp -pr source destination
To move one or more files to a different directory: mv filename ... dir
To rename a file or directory: mv oldname newname
To remove files: rm filename ...
Recommendation: use “ls filename ...” before rm or mv: what happens if you accidentally type “rm *”? Or “rm * .c”? (Note the space!)
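For instance, a sketch with made-up file names:
$ cat run01.out
$ less run01.out                     # press q to quit
$ cp -p model.def backup/model.def
$ ls run*.res                        # check what matches before deleting
$ rm run01.res run02.res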

14 Transferring files
To copy files to a Linux or Mac OS X system: use scp, rsync or insync
To copy files to and from a Windows machine: use WinSCP (Start » All Programs » WinSCP » WinSCP), or scp or rsync under Cygwin
To copy files to and from the Newton cluster:
Host name: newton.mech.unsw.edu.au
Check the RSA2 fingerprint: 69:7e:64:75:57:67:ad:4c:21:8e:90:7d:8e:97:70:ce
User name: your zID
Password: your zPass
Using WinSCP, simply drag and drop files from one pane to the other.
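From a Linux or Mac OS X command line, the equivalent transfers might look like this sketch (the zID, directory and file names are placeholders):
$ scp model.def z1234567@newton.mech.unsw.edu.au:jobs/run01/
$ scp z1234567@newton.mech.unsw.edu.au:jobs/run01/run01.res .
$ rsync -av jobs/run01/ z1234567@newton.mech.unsw.edu.au:jobs/run01/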

15 Editing files
Use an editor to edit text files
Many choices, leading to “religious wars”!
Some options: GNU Emacs, Vim, Nano
Nano is very simple to use: nano filename
CTRL-X to exit (you will be asked to save any changes)
GNU Emacs and Vim are highly customisable and programmable
For example, see the file ~z /.emacs
Debra Cameron et al., Learning GNU Emacs, 3rd Edition, O’Reilly Media, December 2004
Arnold Robbins et al., Learning the vi and Vim Editors, 7th Edition, O’Reilly Media, July 2008

16 Running Ansys CFX jobs
1. Set up your job using Ansys CFX as per normal
2. Connect to the Newton cluster using PuTTY
3. Create a directory for this particular job
4. Transfer the .cfx and .def files to that directory using WinSCP
5. Create an appropriate script file
6. Submit the job to the Newton queue
7. Periodically check the status of the job
8. Once finished, transfer the .out and .res files to your desktop computer
9. Check the results using the standard Ansys CFX tools
Image credit: The Ansys Blog

17 Steps 1 to 4: Setting up the job
1. Set up your job using Ansys CFX as per normal
   May use the laboratory computers to do this
2. Connect to the Newton cluster using PuTTY
   Connect to newton.mech.unsw.edu.au
3. Create a directory for this particular job
   Use the mkdir directory command
   Come up with a consistent naming scheme (see the sketch below)
   Structure your directories; use subdirectories as required
4. Transfer the .cfx and .def files to that directory using WinSCP
   Connect to newton.mech.unsw.edu.au as before
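For step 3, one possible naming scheme is a directory per project with a subdirectory per run (a purely illustrative sketch; choose names that suit your own work):
$ mkdir -p thesis/duct-flow/run01     # -p creates any missing parent directories
$ cd thesis/duct-flow/run01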

18 Step 5: Create a script file
Change to the newly-created directory: cd directory
Invoke the text editor to create a script file: nano filename.sh
Add the following text, replacing parameters as required:
#!/bin/bash
#SBATCH --time=0-12:00:00            # for 0 days 12 hours
#SBATCH --mem=30720                  # 30 GB memory
#SBATCH --ntasks=1                   # A single job
#SBATCH --cpus-per-task=16           # 16 processor cores
#SBATCH --mail-user=your-email-address
#SBATCH --mail-type=ALL
cd $SLURM_SUBMIT_DIR
module load cfx                      # or cfx/14.5 as appropriate
cfx5solve -batch -def filename.def -part 16 \
    -start-method "Platform MPI Local Parallel"
Save the file by pressing CTRL-X and following the prompts

19 Steps 6 to 7: Submit and check on the job
Once you have created the filename.sh script file, submit it into the Newton queue:
Make sure you are in the correct directory
Submit the job: sbatch filename.sh
Take note of the job number: “Submitted batch job jobid”
Once submitted, you do not need to be connected to the cluster
Periodically check on the job status:
The job will start as soon as resources are available for it to run
Emails will be sent to you on job start and completion
Show queue status: squeue or squeue -l (letter L)
Show node status: sinfo
Cancel a running or queued job: scancel jobid
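A typical submit-and-monitor session might look like this sketch (the job ID 1234 is invented for illustration):
$ cd thesis/duct-flow/run01
$ sbatch run01.sh
Submitted batch job 1234
$ squeue -l                    # is the job still queued or running?
$ sinfo                        # how busy are the nodes?
$ scancel 1234                 # only if you need to stop the job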

20 Running Ansys Fluent jobs
Similar to running CFX jobs on the cluster
Different files need to be transferred to and from the cluster
The script file is also slightly different:
#!/bin/bash
#SBATCH --time=0-12:00:00            # for 0 days 12 hours
#SBATCH --mem=30720                  # 30 GB memory
#SBATCH --ntasks=1                   # A single job
#SBATCH --cpus-per-task=16           # 16 processor cores
#SBATCH --mail-user=your-email-address
#SBATCH --mail-type=ALL
cd $SLURM_SUBMIT_DIR
module load fluent                   # or fluent/14.5 as appropriate
fluent 3d -g -t16 -ssh <inputfilename.txt >outputfilename.txt
# may replace “3d” with “2d” for two-dimensional meshes
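Here inputfilename.txt is a Fluent journal file of text-interface commands. A minimal sketch, assuming a case file named mycase.cas and 500 iterations (the file names and commands are placeholders; adjust them to suit your own case and Fluent version):
/file/read-case mycase.cas           ; load the case file
/solve/initialize/initialize-flow    ; initialise the flow field
/solve/iterate 500                   ; run 500 iterations
/file/write-case-data myresult.cas   ; save the case and data files
exit
yes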

21 Getting help with HPC
Whom to ask for help?
Your colleagues
Your supervisor/lecturer
The HPC representative: John Zaitseff
Available for consultations on Tuesdays 9:30am–4pm, by appointment only
Image credit: John Zaitseff, UNSW

