Published by Kerry Griffin. Modified over 8 years ago.
1
Parallel Algorithms & Distributed Computing
Matt Stimmel, Matt White
2
The Goal: 1 Gigapixel
3
How?
5
The Plan Imhotep, How are we going to make all these bricks?
6
The Plan We’ll use the Hebrew Slaves!
7
The Process Hebrew Slave + Benford 6100 Brick Mold = Brick
8
Re-evaluation Yeah, this is pretty slow.
9
Re-evaluation We’ll use MORE Hebrew Slaves!
10
The Revised Process: Lots of Hebrew Slaves + Lots of Benford 6100 Brick Molds = Enough bricks for a pyramid!
11
Thus Parallel Algorithms were born
12
In computers Job Result Process
13
Faster! Job Sub-task Sub-result Result Process
14
What happens here? Job Sub-task Sub-result Result Process
15
Ah Ha! Job Sub-task Sub-result Result Split Combine Process
16
Sequential! Job Sub-task Sub-result Result Split Combine Process
17
So, really, we have… Job → Split (sequential) → Sub-tasks → Process (parallel) → Sub-results → Combine (sequential) → Result
18
Amdahl's Law: Speedup = 1 / ((1 − Parallel) + Parallel / N), where Parallel is the fraction of the job that can run in parallel, (1 − Parallel) is the sequential fraction, and N is the number of processors.
19
Amdahl's Law: Speedup = 1 / ((1 − Parallel) + Parallel / N). Or, as a 5th grader would say, "how many times faster it goes."
20
Amdahl's Law http://en.wikipedia.org/wiki/File:AmdahlsLaw.svg
21
Why Distributed Computing? (diagram: each process costs money)
22
The Plan Imhotep, we can’t afford all these Hebrew Slaves!
23
The Plan Well the surrounding countries have slaves they’re not using…
24
The Revised Process: Cheap. Slaves from Egypt + Egypt's Benford 6100 Brick Molds, Slaves from Mars + Space Brickmakers, Slaves from ChickenLand + Avian Brickmakers, Robot Slaves + Brick Factory = Enough bricks for a pyramid!
25
Distributed Computing: Job → Split → Sub-tasks → Process (over the Internet) → Sub-results → Combine → Result
26
Why Distributed Computing? (diagram: over the Internet, each process still costs money, just not my money)
27
Design
28
The Big Picture (diagram): Client, FractalGen, ImageCombiner, Storage, and Server, connected by control and data paths.
29
Server
Functions:
- Manages connections to clients
- Allows the user to input task parameters
- Divides the job into sub-tasks
- Displays information about the currently running job
Challenges:
- Job division: must divide into a perfect-square number of jobs
- Resolution: problems occur when the resolution isn't an integer multiple of the number of jobs
- Networking: treated client connections as a finite state machine for file transfers. Didn't work; about 1 in 100 transfers failed.
30
Client
Functions:
- Manages connection to the server
- Receives the server-generated command line for the job
- Executes FractalGen on cue
- Informs the server upon completion
Challenges:
- Maintaining a graceful disconnect
- The aforementioned file-transfer failures
31
FractalGen
Functions:
- Render the fractal specified by the command line
- Save the fractal to disk
Challenges:
- How do you draw a Mandelbrot fractal anyway? Z = Z² + C
- Command-line parsing
- Generating large images: the limit is approx. 8000x8000 due to graphics hardware
- Shader implementation: downloaded, then made attempts at optimization
- Iteration count: Shader Model 3 gives a greater instruction count, allowing for more iterations; Shader Model 2 has greater compatibility but cannot render as many iterations
- The client computer must have a GPU capable of running at least DirectX 9, or FractalGen will not work
32
Image Combiner
Functions:
- Merge the images back together
Challenges:
- Finding .bmp files
- Memory allocation: estimates demanded efficient heap usage; a lot of pointers, a lot of pointer math
- Working with bitmaps: multiple color modes (how many bits is a pixel?), "upside down" errors
- Running over a network
33
Overview of Technologies Used
Server: C#, .NET, multithreaded
Client: C#, .NET, multithreaded
FractalGen: C#, .NET, XNA, HLSL (Shader Model 3.0)
ImageCombiner: C++, OOP, manual memory allocation
34
How is the task divided into separate jobs?
35
Breaking up Jobs: Method A
36
Breaking up Jobs: Method B
37
Overlays
38
Method A: Since B didn’t work
39
We now send a square to each client: Client 1, Client 2, Client 3, Client 4
40
The clients return separate images of the fractal.
41
Run the image combiner.
42
Final Output
43
The Big Picture (diagram): Client, FractalGen, ImageCombiner, Storage, and Server, connected by control and data paths.
44
Science
45
Science, Step 1. Hypothesis: Using a parallel implementation and multiple processors, we will be able to increase performance over the same implementation on a single processor.
46
Science, Step 2. Experiment: Run the system, splitting the job into varying numbers of sub-tasks and varying the number of processors available.
47
Science, Step 3. Control: Run the system using one client.
48
Playwriting, Step 1. Major Dramatic Question: Is Sequential / Parallel ≥ 1?
49
Data
52
Useful: Job → Split → Sub-tasks → Process → Sub-results → Combine → Result