
Published by Dayton Meeler. Modified about 1 year ago.

1
Parallel Computing in MATLAB

2
PCT: Parallel Computing Toolbox
–Offload work from one MATLAB session (the client) to other MATLAB sessions (the workers).
–Run as many as eight MATLAB workers (R2010b) on your local machine in addition to your MATLAB client session.

3
MDCS: MATLAB Distributed Computing Server
–Run as many MATLAB workers on a remote cluster of computers as your licensing allows.
–Run workers on your client machine if you want to run more than eight local workers (R2010b).

4
MDCS: Installing

5
Typical Use Cases
–Parallel for-loops: many iterations, or long iterations
–Batch jobs
–Large data sets

6
Parfor: Parallel for-loop
–Has the same basic semantics as for.
–The parfor body is executed on the MATLAB client and workers.
–The necessary data on which parfor operates is sent from the client to the workers, and the results are sent back to the client and pieced together.
–MATLAB workers evaluate iterations in no particular order, and independently of each other.

7
Parfor

Serial:
A = zeros(1024, 1);
for i = 1:1024
    A(i) = sin(i*2*pi/1024);
end
plot(A)

Parallelization:
A = zeros(1024, 1);
matlabpool open local 4
parfor i = 1:1024
    A(i) = sin(i*2*pi/1024);
end
matlabpool close
plot(A)

8
Timing

Serial:
A = zeros(n, 1);
tic
for i = 1:n
    A(i) = sin(i);
end
toc

Parallel:
A = zeros(n, 1);
matlabpool open local 8
tic
parfor i = 1:n
    A(i) = sin(i);
end
toc

9
When to Use Parfor? Not for a small number of simple calculations: the overhead of sending data to the workers and collecting results can outweigh any speedup. Parfor pays off when the loop has many iterations, or individually expensive iterations, that are independent of each other.

10
Classification of Variables
–loop variable
–sliced input variable
–sliced output variable
–broadcast variable
–reduction variable
–temporary variable
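As an illustrative sketch (not from the slides), all six classes can appear in a single loop:

```matlab
% Hypothetical example showing parfor variable classes.
n = 8;                       % n: broadcast variable, sent to every worker
A = rand(n, 4);              % A: sliced input variable, each worker gets A(i, :)
B = zeros(n, 1);             % B: sliced output variable, each worker fills B(i)
total = 0;                   % total: reduction variable, combined across workers
parfor i = 1:n               % i: the loop variable
    t = sum(A(i, :));        % t: temporary variable, local to each iteration
    B(i) = t;
    total = total + t;       % reduction: order of the additions is not guaranteed
end
```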

11
More Notes

Serial:
d = 0; i = 0;
for i = 1:4
    b = i;
    d = i*2;
    A(i) = d;
end
% Afterwards: A = [2 4 6 8], d = 8, i = 4, b = 4

Parallel:
d = 0; i = 0;
parfor i = 1:4
    b = i;
    d = i*2;
    A(i) = d;
end
% Afterwards: A = [2 4 6 8], d = 0, i = 0, b undefined
% (b and d are temporaries: their values on the workers
% do not propagate back to the client.)

12
More Notes

C = 0;
for i = 1:m
    for j = i:n
        C = C + i * j;
    end
end

How to parallelize?
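One possible answer (a sketch, not taken from the slides): parallelize the outer loop and let C be a reduction variable, with each iteration accumulating its own partial sum first:

```matlab
% Parallelize the outer loop; C becomes a reduction variable.
m = 4; n = 6;                % example sizes, assumed for illustration
C = 0;
parfor i = 1:m
    s = 0;                   % temporary partial sum for this iteration
    for j = i:n              % note: the inner loop still starts at i
        s = s + i * j;
    end
    C = C + s;               % reduction across workers
end
```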

13
Parfor: Estimating an Integral

14
Parfor: Estimating an Integral

function q = quad_fun(m, n, x1, x2, y1, y2)
q = 0.0;
u = (x2 - x1)/m;
v = (y2 - y1)/n;
for i = 1:m
    x = x1 + u * i;
    for j = 1:n
        y = y1 + v * j;
        fx = x^2 + y^2;
        q = q + u * v * fx;
    end
end
end

15
Parfor: Estimating an Integral
–Computation complexity: O(m*n).
–Each iteration is independent of the other iterations.
–We can replace for with parfor, for either loop index i or loop index j.

16
Parfor: Estimating an Integral (parfor over the outer loop)

Test sizes (m, n): (100, 100), (1000, 1000), (10000, 10000), (100000, )

function q = quad_fun(m, n, x1, x2, y1, y2)
q = 0.0;
u = (x2 - x1)/m;
v = (y2 - y1)/n;
parfor i = 1:m
    x = x1 + u * i;
    for j = 1:n
        y = y1 + v * j;
        fx = x^2 + y^2;
        q = q + u * v * fx;
    end
end
end

tic
A = quad_fun(m, n, 0, 3, 0, 3);
toc

17
Parfor: Estimating an Integral (parfor over the inner loop)

Test sizes (m, n): (100, 100), (1000, 1000), (10000, 10000), (100000, )

function q = quad_fun(m, n, x1, x2, y1, y2)
q = 0.0;
u = (x2 - x1)/m;
v = (y2 - y1)/n;
for i = 1:m
    x = x1 + u * i;
    parfor j = 1:n
        y = y1 + v * j;
        fx = x^2 + y^2;
        q = q + u * v * fx;
    end
end
end

tic
A = quad_fun(m, n, 0, 3, 0, 3);
toc

18
SPMD: Single Program Multiple Data
–The spmd command is like a very simplified version of MPI.
–The spmd statement lets you define a block of code that runs simultaneously on multiple labs; each lab can have different, unique data for that code.
–Labs can communicate directly via messages; they meet at synchronization points.
–The client program can examine or modify data on any lab.

19
SPMD Statement

20
SPMD Statement

21
SPMD
–MATLAB sets up the requested number of labs, each with a copy of the program.
–Each lab "knows" it is a lab, and has access to two special functions:
 –numlabs(), the number of labs;
 –labindex(), a unique identifier between 1 and numlabs().

22
SPMD
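A minimal sketch (not the slide's exact code) of an spmd block using numlabs() and labindex(); the R2010b-era matlabpool syntax from the earlier slides is assumed:

```matlab
matlabpool open local 4          % later releases use parpool instead
spmd
    fprintf('I am lab %d of %d\n', labindex, numlabs);
    x = labindex^2;              % each lab holds its own value of x
end
x{3}                             % on the client, x is a Composite;
                                 % indexing it reads lab 3's value (9)
matlabpool close
```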

23
Distributed Arrays: distributed()
–You can create a distributed array in the MATLAB client; its data is stored on the labs of the open MATLAB pool.
–A distributed array is distributed in one dimension, along the last nonsingleton dimension, as evenly as possible along that dimension among the labs.
–You cannot control the details of distribution when creating a distributed array.
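A sketch of creating a distributed array on the client (assuming an open pool; the matrix and the sum are illustrative choices, not from the slides):

```matlab
matlabpool open local 4
M = magic(8);
D = distributed(M);          % data is spread across the labs' memory,
                             % along the last nonsingleton dimension
s = sum(D(:));               % many built-ins operate directly on distributed arrays
result = gather(s);          % bring the result back to the client
matlabpool close
```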

24
Distributed Arrays: codistributed()
–You can create a codistributed array by executing on the labs themselves: inside an spmd statement, in pmode, or inside a parallel job.
–When creating a codistributed array, you can control all aspects of distribution, including dimensions and partitions.
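A sketch (assuming a pool of 4 labs; the sizes and partition are illustrative) of creating a codistributed array with an explicit distribution:

```matlab
spmd
    % Distribute the columns of an 8x8 magic square explicitly:
    % dimension 2, two columns per lab, global size [8 8].
    codist = codistributor1d(2, [2 2 2 2], [8 8]);
    C = codistributed(magic(8), codist);
    local = getLocalPart(C);     % each lab holds an 8x2 block
end
```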

26
Example: Trapezoid

27
Example: Trapezoid
–To simplify things, we assume the interval is [0, 1], and we let each lab define a and b to mean the ends of its subinterval.
–If we have 4 labs, then lab number 3 will be assigned [1/2, 3/4].

28
Example: Trapezoid
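The trapezoid computation described above might be sketched like this (an illustrative reconstruction, not the slide's code; f and the number of trapezoids per lab are assumptions):

```matlab
f = @(x) x.^2;                       % stand-in integrand
spmd
    a = (labindex - 1) / numlabs;    % left end of this lab's subinterval
    b = labindex / numlabs;          % right end of this lab's subinterval
    n = 100;                         % trapezoids per lab
    x = linspace(a, b, n + 1);
    y = f(x);
    % Composite trapezoid rule on [a, b]:
    part = (b - a) / n * (sum(y) - (y(1) + y(end)) / 2);
    total = gplus(part);             % sum the partial integrals across labs
end
q = total{1};                        % approximate integral of f over [0, 1]
```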

29
Pmode
–pmode lets you work interactively with a parallel job running simultaneously on several labs.
–Commands you type at the pmode prompt in the Parallel Command Window are executed on all labs at the same time.
–Each lab executes the commands in its own workspace on its own variables.

pmode vs. spmd (both do parallel computing synchronously):
–pmode: each lab has a desktop | spmd: no desktop for labs
–pmode: cannot freely interleave serial and parallel work | spmd: can freely interleave serial and parallel work

30
Pmode

31
Pmode
–labindex() and numlabs() still work.
–Variables on different labs only share a name; they are independent of each other.

32
Pmode: aggregate the array segments into a coherent array.

codist = codistributor1d(2, [], [3 8])
whole = codistributed.build(segment, codist)

33
Pmode: aggregate the array segments into a coherent array.

whole = whole
section = getLocalPart(whole)

34
Pmode: aggregate the array segments into a coherent array.

combined = gather(whole)

35
Pmode: how to change the distribution?

distobj = codistributor1d()
I = eye(6, distobj)
getLocalPart(I)

distobj = codistributor1d(1);
I = redistribute(I, distobj)
getLocalPart(I)

36
GPU Computing

Capabilities:
–Transferring data between the MATLAB workspace and the GPU
–Evaluating built-in functions on the GPU
–Running MATLAB code on the GPU
–Creating kernels from PTX files for execution on the GPU
–Choosing one of multiple GPU cards to use

Requirements:
–NVIDIA CUDA-enabled device with compute capability 1.3 or greater
–NVIDIA CUDA device driver 3.1 or greater
–NVIDIA CUDA Toolkit 3.1 (recommended) for compiling PTX files

37
GPU Computing: transferring data between the workspace and the GPU, creating GPU data

N = 6;
M = magic(N);
G = gpuArray(M);   % copy data to the GPU
M2 = gather(G);    % copy data back to the workspace

38
GPU Computing: executing code on the GPU
–You can transfer or create data on the GPU, and use the resulting GPUArray as input to the enhanced built-in functions that support it.
–You can run your own MATLAB function file on the GPU via arrayfun:

result = arrayfun(@yourfun, arg1, arg2);   % function name not preserved on the slide

–If any of arg1 and arg2 is a GPUArray, the function executes on the GPU and returns a GPUArray.
–If none of the input arguments is a GPUArray, arrayfun executes on the CPU.
–Only element-wise operations are supported.

39
Review
–What are the typical use cases of parallel MATLAB?
–When to use parfor?
–What is the difference between a worker (parfor) and a lab (spmd)?
–What is the difference between spmd and pmode?
–How to build a distributed array?
–How to use the GPU for MATLAB parallel computing?
