
1 Introduction to Parallel Finite Element Method using GeoFEM/HPC-MW
Kengo Nakajima, Dept. Earth & Planetary Science, The University of Tokyo
VECPAR’06 Tutorial: An Introduction to Robust and High Performance Software Libraries for Solving Common Problems in Computational Sciences
July 13, 2006, Rio de Janeiro, Brazil

2 Overview
Introduction
Finite Element Method
Iterative Solvers
Parallel FEM Procedures in GeoFEM/HPC-MW
Local Data Structure in GeoFEM/HPC-MW
Partitioning
Parallel Iterative Solvers in GeoFEM/HPC-MW
Performance of Iterative Solvers
Parallel Visualization in GeoFEM/HPC-MW
Example of Parallel Code using HPC-MW

3 Finite-Element Method (FEM)
One of the most popular numerical methods for solving PDEs, built on elements (meshes) and nodes (vertices).
Consider the following 2D heat transfer problem: 16 nodes, 9 bi-linear elements, uniform thermal conductivity (λ=1), uniform volume heat flux (Q=1), T=0 at node 1, insulated boundaries.
(Figure: 4x4 grid of nodes 1-16 forming 9 elements.)
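The governing equation itself appears only as a figure on the original slide; a plausible reconstruction for this steady-state heat conduction problem, assuming the stated conductivity λ and volumetric source Q, is

    \lambda \left( \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right) + Q = 0 \quad \text{in } \Omega,
    \qquad T = 0 \ \text{at node 1},
    \qquad \frac{\partial T}{\partial n} = 0 \ \text{on the insulated boundaries.}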

4 Galerkin FEM procedures
In each element the temperature is interpolated as T = [N]{φ}, where {φ} is T at each vertex and [N] is the shape function (interpolation function).
Introduce the "weak form" of the original PDE using Green's theorem, and apply the Galerkin procedure to each element. (The equations on this slide appear only as figures.)
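A standard restatement of that weak form (my reconstruction, not text copied from the slide): multiply the PDE by a test function w = [N]{ψ}, integrate over the element, and apply Green's theorem; the boundary term vanishes on the insulated boundaries:

    \int_{\Omega^e} \lambda \, \nabla w \cdot \nabla T \, d\Omega = \int_{\Omega^e} w \, Q \, d\Omega ,
    \qquad T = [N]\{\phi\} \ \text{in each element.}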

5 Element Matrix
Apply the integration to each element and form the "element" matrix.
(Figure: element e with corner nodes A, B, C, D and its 4x4 element matrix.)
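In the usual Galerkin notation (a reconstruction, since the slide shows the matrix only graphically), the element equation for element e with vertices A, B, C, D reads

    [k^{(e)}]\{\phi^{(e)}\} = \{f^{(e)}\},
    \qquad
    k^{(e)}_{ij} = \int_{\Omega^e} \lambda \, \nabla N_i \cdot \nabla N_j \, d\Omega ,
    \qquad
    f^{(e)}_{i} = \int_{\Omega^e} N_i \, Q \, d\Omega ,
    \qquad i, j \in \{A, B, C, D\}.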

6 Global (Overall) Matrix
Accumulate each element matrix into the "global" matrix.
(Figure: the 16-node mesh and the assembled 16x16 global matrix.)
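Symbolically (again a reconstruction of what the slide shows graphically), assembly adds each element contribution into the rows and columns of the global system that belong to that element's vertices:

    [K]\{\Phi\} = \{F\},
    \qquad
    K_{IJ} = \sum_{e} k^{(e)}_{ij},
    \qquad
    F_{I} = \sum_{e} f^{(e)}_{i},

where the global indices I, J are the node numbers of the local vertices i, j of element e.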

7 To each node …
The effects of the surrounding elements/nodes are accumulated.
(Figure: the mesh, highlighting one node and the elements that share it.)

8 Solve the obtained global/overall equations under the given boundary conditions (φ = 0 at node 1 in this case).
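The slide does not say how the boundary condition is imposed; one common way (an illustrative assumption, not necessarily what GeoFEM/HPC-MW does) is to overwrite the constrained equation and move the known value to the right-hand side:

    K_{1J} = \delta_{1J}, \quad F_{1} = 0 \;\Rightarrow\; \phi_{1} = 0,
    \qquad F_{I} \leftarrow F_{I} - K_{I1}\,\phi_{1} \ \ (I \neq 1).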

9 Result …

10 Features of FEM applications
Typical procedures for FEM computations:
- Input/Output
- Matrix Assembling
- Linear Solvers for Large-scale Sparse Matrices
Most of the computation time is spent on matrix assembling/formation and on solving the linear equations.
- HUGE "indirect" accesses: memory intensive
- Local "element-by-element" operations: sparse coefficient matrices, suitable for parallel computing
- Excellent modularity of each procedure

11 Introduction
Finite Element Method
Iterative Solvers
Parallel FEM Procedures in GeoFEM/HPC-MW
Local Data Structure in GeoFEM/HPC-MW
Partitioning
Parallel Iterative Solvers in GeoFEM/HPC-MW
Performance of Iterative Solvers
Parallel Visualization in GeoFEM/HPC-MW
Example of Parallel Code using HPC-MW

12 Goal of GeoFEM/HPC-MW as an Environment for Development of Parallel FEM Applications
- NO MPI calls in user's code!
- As serial as possible!
- An original FEM code developed for a single-CPU machine can work on parallel computers with minimal modification.
- Careful design of the local data structure for distributed parallel computing is very important.

13 What is Parallel Computing?
To solve larger problems faster:
- finer meshes provide more accurate solutions
- very fine meshes are required for simulations of heterogeneous fields
(Figure: homogeneous vs. heterogeneous porous media, Lawrence Livermore National Laboratory.)

14 What is Parallel Computing? (cont.)
- A PC with 1 GB of memory: roughly 1M meshes are the limit for FEM.
- Southwest Japan, (1000 km)^3 at a 1 km mesh -> 10^9 meshes.
- Large data -> domain decomposition -> local operations, with inter-domain communication for global operations.
(Figure: large-scale data partitioned into local data sets, connected by communication.)

15 What is Communication?
Parallel computing -> local operations. Communications are required in global operations for consistency.

16 Parallel Computing in GeoFEM/HPC-MW
Algorithms: parallel iterative solvers & local data structure.
- Parallel iterative solvers written in Fortran90 + MPI
  – Iterative methods are the only choice for large-scale problems with parallel processing.
  – Portability is important -> from PC clusters to the Earth Simulator.
- Appropriate local data structure for (FEM + parallel iterative method)
  – FEM is based on local operations.

17 Parallel Computing in FEM
SPMD: Single-Program Multiple-Data
- Large-scale data are partitioned into distributed local data sets.
- The FEM code on each PE assembles the coefficient matrix for its local data set: this part is completely local, the same as the serial operations.
- Global operations & communications happen only in the linear solvers: dot products, matrix-vector multiplication, preconditioning.
(Figure: on each PE, Local Data -> FEM code -> Linear Solver, with MPI connecting the solvers.)

18 Parallel Computing in GeoFEM/HPC-MW
Finally, users can develop parallel FEM codes easily with GeoFEM/HPC-MW without considering parallel operations: the local data structure and the linear solvers take care of them. The procedures are basically the same as those of serial operations. This is possible because FEM is based on local operations; FEM is really suitable for parallel computing.
NO MPI in user's code: plug-in.

19 Plug-in in GeoFEM

20 Plug-in in HPC-MW
(Figure: an FEM code developed on a PC calls common interfaces (I/F for Vis., I/F for Solvers, I/F for Mat. Ass., I/F for I/O), which plug into hardware-optimized HPC-MW libraries (Vis., Linear Solver, Matrix Assemble, I/O) for the Earth Simulator, the Hitachi SR1100, and an Opteron cluster.)

21 Plug-in in HPC-MW
(Figure: the same diagram reduced to the FEM code developed on a PC, its four interfaces, and the HPC-MW library for the Earth Simulator.)

22 Introduction
Finite Element Method
Iterative Solvers
Parallel FEM Procedures in GeoFEM/HPC-MW
Local Data Structure in GeoFEM/HPC-MW
Partitioning
Parallel Iterative Solvers in GeoFEM/HPC-MW
Performance of Iterative Solvers
Parallel Visualization in GeoFEM/HPC-MW
Example of Parallel Code using HPC-MW

23 Bi-Linear Square Elements
Values are defined on each node.
Divide the mesh into two domains in a "node-based" manner, so that the numbers of nodes (vertices) are balanced. Local information alone is not enough for matrix assembling: information on the overlapped elements and the nodes connected to them is required for matrix assembling at the boundary nodes.
(Figure: a strip of bi-linear elements split between two domains, with the overlapped elements shown.)

24 Local Data of GeoFEM/HPC-MW
Node-based partitioning, for IC/ILU-type preconditioning methods. Local data include information on:
- nodes originally assigned to the partition/PE
- elements which include those nodes: element-based operations (matrix assembly) are allowed for fluid/structure subsystems
- all nodes which form those elements but lie outside the partition
Nodes are classified into the following three categories from the viewpoint of message passing:
- internal nodes: originally assigned nodes
- external nodes: nodes in the overlapped elements but outside the partition
- boundary nodes: external nodes of other partitions
Communication tables between partitions are stored; NO global information is required except partition-to-partition connectivity. A sketch of these local variables follows.
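To make the categories concrete, here is a minimal Fortran sketch of the variables a local data set carries, reusing the array names that appear on the communication-table slides below (NEIBPETOT, NEIBPE, EXPORT_*/IMPORT_*); the module packaging and the node-count names N and NP are assumptions for illustration, not the actual GeoFEM/HPC-MW definitions.

    module local_mesh_sketch
      implicit none
      ! node counts (names N and NP are assumed here for illustration)
      integer :: N                             ! number of internal nodes (originally assigned)
      integer :: NP                            ! total nodes incl. external nodes (N+1..NP are external)

      ! neighboring domains
      integer :: NEIBPETOT                     ! number of neighbors
      integer, allocatable :: NEIBPE(:)        ! ranks of the neighboring domains

      ! communication tables (1D compressed index + item arrays)
      integer, allocatable :: EXPORT_INDEX(:)  ! (0:NEIBPETOT) boundary nodes to SEND
      integer, allocatable :: EXPORT_ITEM(:)   ! local IDs of the boundary nodes
      integer, allocatable :: IMPORT_INDEX(:)  ! (0:NEIBPETOT) external nodes to RECEIVE
      integer, allocatable :: IMPORT_ITEM(:)   ! local IDs of the external nodes
    end module local_mesh_sketch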

25 Node-based Partitioning
Internal nodes - elements - external nodes.
(Figure: a 5x5 mesh of nodes 1-25 partitioned into four domains, PE#0-PE#3.)

26 Node-based Partitioning
Internal nodes - elements - external nodes:
- the partitioned nodes themselves (internal nodes)
- the elements which include the internal nodes
- the external nodes included in those elements, in the region overlapped among partitions
Information on the external nodes is required for completely local element-based operations on each processor.
(Figure: one partition, highlighting its internal nodes, the elements containing them, and the resulting external nodes.)

27 Node-based Partitioning (cont.)
Because the external nodes and the overlapped elements are stored locally, element-based operations are completely local on each processor: we do not need communication during matrix assembly!

28 Parallel Computing in FEM
SPMD: Single-Program Multiple-Data
(Figure: four PEs, each running Local Data -> FEM code -> Linear Solvers, with MPI connecting the linear solvers of the PEs.)


33 What is Communication?
Getting information for EXTERNAL NODES from EXTERNAL PARTITIONS.
The "communication tables" in the local data structure include the procedures for this communication.

34 How to "Parallelize" Iterative Solvers?
Parallel procedures are required in:
- dot products
- matrix-vector multiplication
e.g., CG method (with no preconditioning):

    Compute r(0)= b - [A]x(0)
    for i= 1, 2, ...
        z(i-1)= r(i-1)
        ρ(i-1)= r(i-1)·z(i-1)
        if i=1
            p(1)= z(0)
        else
            β(i-1)= ρ(i-1)/ρ(i-2)
            p(i)= z(i-1) + β(i-1) p(i-1)
        endif
        q(i)= [A]p(i)
        α(i)= ρ(i-1)/(p(i)·q(i))
        x(i)= x(i-1) + α(i) p(i)
        r(i)= r(i-1) - α(i) q(i)
        check convergence |r|
    end

35 How to "Parallelize" Dot Products
Use MPI_ALLREDUCE after the local operation:

    RHO= 0.d0
    do i= 1, N
      RHO= RHO + W(i,R)*W(i,Z)
    enddo
    Allreduce RHO
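The slide leaves the Allreduce call itself implicit. A minimal self-contained sketch of how that step typically looks in Fortran90 + MPI; the subroutine name parallel_dot and the local-sum variable RHO0 are illustrative assumptions, not part of the GeoFEM/HPC-MW interface:

    ! sketch: global dot product of columns R and Z of work array W
    subroutine parallel_dot (N, W, R, Z, RHO)
      use mpi
      implicit none
      integer, intent(in)  :: N, R, Z          ! N: number of internal nodes
      real(kind=8), intent(in)  :: W(:,:)
      real(kind=8), intent(out) :: RHO
      real(kind=8) :: RHO0
      integer :: i, ierr

      RHO0= 0.d0
      do i= 1, N                               ! local sum over internal nodes only
        RHO0= RHO0 + W(i,R)*W(i,Z)
      enddo

      ! global sum over all PEs: every PE receives the same RHO
      call MPI_ALLREDUCE (RHO0, RHO, 1, MPI_DOUBLE_PRECISION,    &
     &                    MPI_SUM, MPI_COMM_WORLD, ierr)
    end subroutine parallel_dot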

36 How to "Parallelize" Matrix-Vector Multiplication
We need the values of the {p} vector at the EXTERNAL nodes BEFORE the computation (get {p} at EXTERNAL nodes, then):

    do i= 1, N
      q(i)= D(i)*p(i)
      do k= INDEX_L(i-1)+1, INDEX_L(i)
        q(i)= q(i) + AMAT_L(k)*p(ITEM_L(k))
      enddo
      do k= INDEX_U(i-1)+1, INDEX_U(i)
        q(i)= q(i) + AMAT_U(k)*p(ITEM_U(k))
      enddo
    enddo

37 What is Communication?
Getting information for EXTERNAL NODES from EXTERNAL PARTITIONS.
The "communication tables" in the local data structure include the procedures for this communication.

38 Communication Table: SEND
- Number of neighbors: NEIBPETOT
- Neighboring domains: NEIBPE(ip), ip= 1, NEIBPETOT
- 1D compressed index for "boundary" nodes: EXPORT_INDEX(ip), ip= 0, NEIBPETOT
- Array of "boundary" nodes: EXPORT_ITEM(k), k= 1, EXPORT_INDEX(NEIBPETOT)

39 PE-to-PE communication: SEND
PE#2 sends information on its "boundary nodes":

    NEIBPETOT= 2
    NEIBPE(1)= 3, NEIBPE(2)= 0
    EXPORT_INDEX(0)= 0
    EXPORT_INDEX(1)= 2
    EXPORT_INDEX(2)= 2+3 = 5
    EXPORT_ITEM(1:5)= 1, 4, 4, 5, 6

(Figure: the local node numberings of PE#2 and of its neighbors PE#0 and PE#3.)

40 Communication Table: SEND
Send information on the "boundary nodes". The send buffer is packed neighbor by neighbor: the block for neighbor neib occupies SENDbuf(export_index(neib-1)+1 : export_index(neib)).

    do neib= 1, NEIBPETOT
      do k= export_index(neib-1)+1, export_index(neib)
        kk= export_item(k)
        SENDbuf(k)= VAL(kk)
      enddo
    enddo

    do neib= 1, NEIBPETOT
      iS_e= export_index(neib-1) + 1
      iE_e= export_index(neib  )
      BUFlength_e= iE_e + 1 - iS_e
      call MPI_ISEND                                                  &
     &     (SENDbuf(iS_e), BUFlength_e, MPI_INTEGER, NEIBPE(neib), 0, &
     &      MPI_COMM_WORLD, request_send(neib), ierr)
    enddo
    call MPI_WAITALL (NEIBPETOT, request_send, stat_recv, ierr)

41 Communication Table: RECEIVE
- Number of neighbors: NEIBPETOT
- Neighboring domains: NEIBPE(ip), ip= 1, NEIBPETOT
- 1D compressed index for "external" nodes: IMPORT_INDEX(ip), ip= 0, NEIBPETOT
- Array of "external" nodes: IMPORT_ITEM(k), k= 1, IMPORT_INDEX(NEIBPETOT)

42 PE-to-PE communication: RECEIVE
PE#2 receives information for its "external nodes":

    NEIBPETOT= 2
    NEIBPE(1)= 3, NEIBPE(2)= 0
    IMPORT_INDEX(0)= 0
    IMPORT_INDEX(1)= 3
    IMPORT_INDEX(2)= 3+3 = 6
    IMPORT_ITEM(1:6)= 7, 8, 10, 9, 11, 12

(Figure: the local node numberings of PE#2 and of its neighbors PE#0 and PE#3.)

43 Communication Table: RECEIVE
Receive information for the "external nodes". The receive buffer is laid out neighbor by neighbor: the block for neighbor neib occupies RECVbuf(import_index(neib-1)+1 : import_index(neib)).

    do neib= 1, NEIBPETOT
      iS_i= import_index(neib-1) + 1
      iE_i= import_index(neib  )
      BUFlength_i= iE_i + 1 - iS_i
      call MPI_IRECV                                                  &
     &     (RECVbuf(iS_i), BUFlength_i, MPI_INTEGER, NEIBPE(neib), 0, &
     &      MPI_COMM_WORLD, request_recv(neib), ierr)
    enddo
    call MPI_WAITALL (NEIBPETOT, request_recv, stat_recv, ierr)

    do neib= 1, NEIBPETOT
      do k= import_index(neib-1)+1, import_index(neib)
        kk= import_item(k)
        VAL(kk)= RECVbuf(k)
      enddo
    enddo
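For the matrix-vector product on slide 36, the SEND and RECEIVE fragments above are used together to refresh a real-valued vector at the external nodes before the local loop. The following self-contained subroutine is only a sketch assembled from those two slides; the name update_external, the argument list, and the switch from MPI_INTEGER to MPI_DOUBLE_PRECISION are assumptions for illustration, not the actual GeoFEM/HPC-MW interface.

    ! sketch: make VAL valid at external nodes by exchanging boundary-node values
    subroutine update_external (NP, VAL, NEIBPETOT, NEIBPE,               &
   &                            import_index, import_item,                &
   &                            export_index, export_item)
      use mpi
      implicit none
      integer, intent(in) :: NP, NEIBPETOT
      integer, intent(in) :: NEIBPE(:), import_index(0:), import_item(:)
      integer, intent(in) :: export_index(0:), export_item(:)
      real(kind=8), intent(inout) :: VAL(NP)

      real(kind=8), allocatable :: SENDbuf(:), RECVbuf(:)
      integer, allocatable :: request_send(:), request_recv(:)
      integer, allocatable :: stat_send(:,:), stat_recv(:,:)
      integer :: neib, k, iS, length, ierr

      allocate (SENDbuf(export_index(NEIBPETOT)), RECVbuf(import_index(NEIBPETOT)))
      allocate (request_send(NEIBPETOT), request_recv(NEIBPETOT))
      allocate (stat_send(MPI_STATUS_SIZE,NEIBPETOT), stat_recv(MPI_STATUS_SIZE,NEIBPETOT))

      ! pack boundary-node values and post non-blocking sends
      do neib= 1, NEIBPETOT
        do k= export_index(neib-1)+1, export_index(neib)
          SENDbuf(k)= VAL(export_item(k))
        enddo
        iS    = export_index(neib-1) + 1
        length= export_index(neib) - export_index(neib-1)
        call MPI_ISEND (SENDbuf(iS), length, MPI_DOUBLE_PRECISION,        &
       &                NEIBPE(neib), 0, MPI_COMM_WORLD, request_send(neib), ierr)
      enddo

      ! post non-blocking receives for external-node values
      do neib= 1, NEIBPETOT
        iS    = import_index(neib-1) + 1
        length= import_index(neib) - import_index(neib-1)
        call MPI_IRECV (RECVbuf(iS), length, MPI_DOUBLE_PRECISION,        &
       &                NEIBPE(neib), 0, MPI_COMM_WORLD, request_recv(neib), ierr)
      enddo
      call MPI_WAITALL (NEIBPETOT, request_recv, stat_recv, ierr)

      ! unpack received values into the external-node entries of VAL
      do neib= 1, NEIBPETOT
        do k= import_index(neib-1)+1, import_index(neib)
          VAL(import_item(k))= RECVbuf(k)
        enddo
      enddo

      call MPI_WAITALL (NEIBPETOT, request_send, stat_send, ierr)
      deallocate (SENDbuf, RECVbuf, request_send, request_recv, stat_send, stat_recv)
    end subroutine update_external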

44 Local Data Structure …
So far, we have spent several slides describing the concept of the local data structure of GeoFEM/HPC-MW, which includes the information needed for inter-domain communication. Actually, users do not need to know the details of this local data structure. Most of the procedures that use the communication tables, such as parallel I/O, linear solvers and parallel visualization, are executed inside subroutines provided by GeoFEM/HPC-MW. All you have to do is call these subroutines.

