
Slide 1: Scaling Up Data Intensive Scientific Applications to Campus Grids
Douglas Thain, University of Notre Dame
LSAP Workshop, Munich, June 2009

Slide 2: Overview
Challenges in Using Campus Grids
Solution: Abstractions
Examples and Applications
– All-Pairs: Biometrics, Data Mining
– Wavefront: Genomics and Economics
– Some-Pairs: Genomics
Abstractions, Workflows, and Languages

Slide 3: What is a Campus Grid?
A campus grid is an aggregation of all available computing power found in an institution:
– Idle cycles from desktop machines.
– Unused cycles from dedicated clusters.
Examples of campus grids:
– 600 CPUs at the University of Notre Dame
– 2,000 CPUs at the University of Wisconsin
– 13,000 CPUs at Purdue University
Clusters, clouds, and grids are all similar concepts.


Slide 7: Campus grids can give us access to more machines than we can possibly use. But are they easy to use?

Slide 8: Example: Biometrics Research
Goal: Design a robust face comparison function F.
(Figure: F maps a pair of face images to a similarity score, e.g. 0.05 for one pair and 0.97 for another.)

Slide 9: Similarity Matrix Construction
(Figure: the resulting matrix of pairwise similarity scores, with 1.0 along the diagonal.)
Challenge workload: 60,000 iris images, 1MB each, 0.02s per F, 833 CPU-days, 600 TB of I/O.
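The 833 CPU-days figure follows directly from the workload size. A quick sanity check (a hypothetical script, not part of the talk):

```python
# Back-of-envelope check of the challenge workload on this slide.
images = 60_000        # iris images
f_runtime = 0.02       # seconds per comparison F

comparisons = images * images                  # full all-pairs matrix
cpu_days = comparisons * f_runtime / 86_400    # 86,400 seconds per day

print(f"{comparisons:,} comparisons, {cpu_days:,.0f} CPU-days")
# -> 3,600,000,000 comparisons, 833 CPU-days
```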

Slide 10: I have 60,000 iris images acquired in my research lab. I want to reduce each one to a feature space, and then compare all of them to each other. I want to spend my time doing science, not struggling with computers. I have a laptop. I own a few machines. We have access to a campus grid. What should I do?

Slide 11: We said: Try using our campus grid! (How hard could it be?)

Slide 12: Non-Expert User Using 500 CPUs
Try 1: Each F is a batch job. Failure: dispatch latency >> F runtime.
Try 2: Each row is a batch job. Failure: too many small operations on the file system.
Try 3: Bundle all files into one package. Failure: everyone loads 1GB at once.
Try 4: User gives up and attempts to solve an easier or smaller problem.

Slide 13: Why are Grids Hard to Use?
System properties:
– Wildly varying resource availability.
– Heterogeneous resources.
– Unpredictable preemption.
– Unexpected resource limits.
User considerations:
– Jobs can’t run for too long... but they can’t run too quickly, either!
– I/O operations must be carefully matched to the capacity of clients, servers, and networks.
– Users often do not even have access to the information needed to make good choices!

Slide 14: Overview (outline repeated)

Slide 15: Observation
In a given field of study, many people repeat the same pattern of work many times, making slight changes to the data and algorithms.
If the system knows the overall pattern in advance, then it can do a better job of executing it reliably and efficiently.
If the user knows in advance what patterns are allowed, then they have a better idea of how to construct their workloads.

Slide 16: What’s the Most Successful Parallel Programming Language?
OpenGL:
– A declarative specification of a workload.
– Ported to a wide variety of hardware over 20 years.
– The graphics pipeline is very specific:
  Transform points to coordinate space.
  Connect polygons to transformed points.
  Stretch textures across polygons.
  Sort everything by Z-depth.
– Can we apply the same idea to grids?

Slide 17: Abstractions for Distributed Computing
Abstraction: a declarative specification of the computation and data of a workload.
A restricted pattern, not meant to be a general purpose programming language.
Uses data structures instead of files.
Provides users with a bright path.
Regular structure makes it tractable to model and predict performance.

Slide 18: Abstractions as Higher-Order Functions
AllPairs( set A, set B, function F )
– returns M[i,j] = F( A[i], B[j] )
SomePairs( set A, list(i,j) L, function F )
– returns list of F( A[i], A[j] )
Wavefront( matrix R, function F )
– returns R[i,j] = F( R[i-1,j], R[i,j-1] )
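The three definitions above can be written as sequential reference functions. This is a sketch of the semantics only; the actual engines execute these patterns across a grid:

```python
def all_pairs(A, B, F):
    """M[i][j] = F(A[i], B[j]) for all i, j."""
    return [[F(a, b) for b in B] for a in A]

def some_pairs(A, L, F):
    """F(A[i], A[j]) for each (i, j) in the explicit list L."""
    return [F(A[i], A[j]) for (i, j) in L]

def wavefront(R, F):
    """R[i][j] = F(R[i-1][j], R[i][j-1]), given row 0 and column 0
    as boundary data."""
    n = len(R)
    for i in range(1, n):
        for j in range(1, len(R[i])):
            R[i][j] = F(R[i - 1][j], R[i][j - 1])
    return R
```

Note that all_pairs evaluates F on every combination, while some_pairs touches only the listed pairs, which is exactly the trade-off the later slides compare.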

Slide 19: Working with Abstractions
(Diagram: the user supplies sets A1...An and B1...Bn plus the function F to AllPairs( A, B, F ); a custom workflow engine executes the workload on the campus grid and returns a compact data structure.)

Slide 20: Overview (outline repeated)

Slide 21: All-Pairs Abstraction
AllPairs( set A, set B, function F ) returns matrix M where M[i][j] = F( A[i], B[j] ) for all i,j.
Command-line invocation: allpairs A B F.exe

Slide 22: How Does the Abstraction Help?
The custom workflow engine:
– Chooses the right data transfer strategy.
– Chooses the right number of resources.
– Chooses the blocking of functions into jobs.
– Recovers from a large number of failures.
– Predicts overall runtime accurately.
All of these tasks are nearly impossible for arbitrary workloads, but are tractable (not trivial) to solve for a specific abstraction.
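The blocking choice can be sketched as follows: pick a tile size so that each batch job runs long enough to amortize dispatch latency. The target ratio and helper names below are illustrative assumptions, not the engine's actual policy:

```python
import math

def choose_tile_side(f_runtime, dispatch_latency, target_ratio=100):
    """Side of a square tile of F calls whose total runtime is about
    target_ratio times the cost of dispatching one batch job."""
    calls_per_job = target_ratio * dispatch_latency / f_runtime
    return max(1, math.isqrt(int(calls_per_job)))

def tiles(n_a, n_b, side):
    """Enumerate (rows, cols) tiles covering an n_a x n_b matrix."""
    for i in range(0, n_a, side):
        for j in range(0, n_b, side):
            yield range(i, min(i + side, n_a)), range(j, min(j + side, n_b))
```

With 0.02s per F and a 30s dispatch latency, each job would cover a 387x387 tile, roughly 3,000 seconds of work, comfortably dominating the dispatch cost.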


Slide 24: Distribute Data Via Spanning Tree

Slide 25: Choose the Right Number of CPUs

Slide 26: Conventional vs. Abstraction

Slide 27: All-Pairs in Production
Our All-Pairs implementation has provided over 57 CPU-years of computation to the ND biometrics research group over the last year.
Largest run so far: 58,396 irises from the Face Recognition Grand Challenge. The largest experiment ever run on publicly available data.
Competing biometric research relies on samples of 100-1000 images, which can miss important population effects.
Reduced computation time from 833 days to 10 days, making it feasible to repeat multiple times for a graduate thesis. (We can go faster yet.)


Slide 29: Overview (outline repeated)

Slide 30: Wavefront
Wavefront( matrix M, function F(x,y,d) ) returns matrix M such that M[i,j] = F( M[i-1,j], M[i,j-1], M[i-1,j-1] ).
(Diagram: cells of M are computed in anti-diagonal order; each cell depends on the cells at (i-1,j), (i,j-1), and (i-1,j-1).)
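A sequential sketch of the three-argument recurrence on this slide, iterating anti-diagonals in the order the cells become ready (in the real system, each ready cell is an independently dispatched task):

```python
def wavefront3(M, F):
    """M[i][j] = F(M[i-1][j], M[i][j-1], M[i-1][j-1]),
    with row 0 and column 0 given as boundary data."""
    n = len(M)
    for k in range(2, 2 * n - 1):          # anti-diagonal index k = i + j
        for i in range(max(1, k - n + 1), min(k, n)):
            j = k - i
            M[i][j] = F(M[i - 1][j], M[i][j - 1], M[i - 1][j - 1])
    return M
```

The anti-diagonal loop makes the parallelism visible: every cell on diagonal k depends only on diagonals k-1 and k-2, so all cells on a diagonal could run concurrently.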

Slide 31: Applications of Wavefront
Bioinformatics:
– Compute the alignment of two large DNA strings in order to find similarities between species. Existing tools do not scale up to complete DNA strings.
Economics:
– Simulate the interaction between two competing firms, each of which has an effect on resource consumption and market price. E.g., when will we run out of oil?
Applies to any kind of optimization problem solvable with dynamic programming.

Slide 32: Problem: Dispatch Latency
Even with an infinite number of CPUs, dispatch latency controls the total execution time: O(n) in the best case.
However, job dispatch latency in an unloaded grid is about 30 seconds, which may outweigh the runtime of F. Things get worse when queues are long!
Solution: Build a lightweight task dispatch system. (Idea from Falkon@UC.)

Slide 33: (Diagram of the lightweight task dispatch system: a wavefront engine maintains queued and done task lists and feeds a work queue; 1000s of workers are dispatched via Condor/SGE/SSH. For each task the master performs: put F.exe, put in.txt, exec F.exe producing out.txt, get out.txt.)
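The master/worker pattern in the diagram can be mimicked in-process. This toy version, with threads and an in-memory queue standing in for remote workers and file transfer, shows the control flow only:

```python
import queue
import threading

def worker(tasks, results):
    """Pull tasks until a None sentinel arrives; 'exec' each one."""
    while True:
        task = tasks.get()
        if task is None:                    # sentinel: no more work
            tasks.task_done()
            return
        task_id, func, arg = task
        results.put((task_id, func(arg)))   # stands in for exec + get out.txt
        tasks.task_done()

def run(funcs_args, n_workers=4):
    """Dispatch (func, arg) pairs to a pool of workers; collect in order."""
    tasks, results = queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for i, (f, a) in enumerate(funcs_args):
        tasks.put((i, f, a))                # stands in for put F.exe + in.txt
    for _ in threads:
        tasks.put(None)
    tasks.join()
    out = {}
    while not results.empty():
        tid, val = results.get()
        out[tid] = val
    return [out[i] for i in range(len(funcs_args))]
```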

Slide 34: Problem: Performance Variation
Tasks can be delayed for many reasons:
– Heterogeneous hardware.
– Interference with disk/network.
– Policy-based suspension.
Any delayed task in Wavefront has a cascading effect on the rest of the workload.
Solution – Fast Abort: keep statistics on task runtimes, and abort those that lie significantly outside the mean. Prefer to assign jobs to machines with a fast history.
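The Fast-Abort idea can be sketched as a running-statistics check. The mean-plus-k-sigma threshold below is an illustrative assumption, not the system's published rule:

```python
import math

class FastAbort:
    """Track task runtimes; flag tasks running far past the mean."""

    def __init__(self, k=3.0):
        self.k, self.n = k, 0
        self.sum, self.sumsq = 0.0, 0.0

    def record(self, runtime):
        self.n += 1
        self.sum += runtime
        self.sumsq += runtime * runtime

    def should_abort(self, elapsed):
        if self.n < 2:                      # too little history to judge
            return False
        mean = self.sum / self.n
        var = max(0.0, self.sumsq / self.n - mean * mean)
        return elapsed > mean + self.k * math.sqrt(var)
```

An aborted task would then be resubmitted, preferably to a machine whose recorded history is fast, as the slide suggests.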

Slide 35: 500x500 Wavefront on ~200 CPUs

Slide 36: Wavefront on a 200-CPU Cluster

Slide 37: Wavefront on a 32-Core CPU

Slide 38: Performance Prediction is Possible
Often, users have no idea whether a task will take one day or one year -> better to find out at the beginning!
Allows the system to choose automatically whether to run locally or on the campus grid.
Of course, performance prediction is technically and philosophically dangerous: we simply argue that abstractions are more predictable than general programs.

Slide 39: Overview (outline repeated)

Slide 40: The Genome Assembly Problem
(Diagram: chemical sequencing breaks a genome such as AGTCGATCGATCGATAATCGATCCTAGCTAGCTACGA into millions of "reads," each hundreds of bytes long; computational assembly overlaps reads such as AGTCGATCGATCGAT, TCGATAATCGATCCTAGCTA, and AGCTAGCTACGA to reconstruct the original.)
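In assembly, the pairwise function F tests whether two reads overlap. A toy exact-match version is shown below; real assemblers use alignment that tolerates sequencing errors:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of read a that equals a prefix of
    read b, or 0 if no overlap of at least min_len exists."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0
```

For the reads on this slide, overlap("AGTCGATCGATCGAT", "TCGATAATCGATCCTAGCTA") is 5, matching the shared "TCGAT" region where the two reads join.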

Slide 41: Sample Genomes

Genome                 Reads   Data    Pairs   Sequential Time
A. gambiae scaffold    101K    80MB    738K    12 hours
A. gambiae complete    180K    1.4GB   12M     6 days
S. bicolor simulated   7.9M    5.7GB   84M     30 days

Slide 42: Genome Assembly Today
Several commercial firms provide an assembly service that takes weeks on a dedicated cluster, costs O($10K), and is based on human genome heuristics.
Genome researchers would like to be able to perform custom assemblies using their own data and heuristics.
Can this be done on a campus grid?

Slide 43: Some-Pairs Abstraction
SomePairs( set A, list (i,j) L, function F(x,y) ) returns list of F( A[i], A[j] ).
(Diagram: only the listed pairs, e.g. (1,2), (2,1), (2,3), (3,3), are computed, rather than the full matrix.)

Slide 44: Distributed Genome Assembly
(Diagram: a somepairs master queues tasks to a work queue serving 100s of workers dispatched to Notre Dame, Purdue, and Wisconsin; each worker performs put align.exe, put in.txt, exec, get out.txt.)

Slide 45: Small Genome (101K reads)

Slide 46: Medium Genome (180K reads)

Slide 47: Large Genome (7.9M reads)

Slide 48: From Workstation to Grid

Slide 49: What’s the Upshot?
We can do full-scale assemblies as a routine matter on existing conventional machines.
Our solution is faster (wall-clock time) than the next fastest assembler run on a 1024-node BG/L.
You could almost certainly do better with a dedicated cluster and a fast interconnect, but such systems are not universally available.
Our solution opens up research in assembly to labs with "NASCAR" instead of "Formula One" hardware.

Slide 50: Overview (outline repeated)

Slide 51: Other Abstractions for Computing
Directed Graph
Bag of Tasks
Map-Reduce

Slide 52: Partial Lattice of Abstractions
(Diagram: abstractions arranged along an axis from robust performance to expressive power: All-Pairs, Some-Pairs, Wavefront, Map-Reduce, Bag of Tasks, Directed Graph, and Lambda Calculus.)

Slide 53: Two Abstractions Compared
AllPairs( set A, set B, F(x,y) ) vs. SomePairs( set S, list L, F(x,y) ), assuming that A = B = S...
Can you express AllPairs using SomePairs?
– Yes, but you must enumerate all pairs explicitly. It is not trivial for SomePairs to minimize the amount of data transferred to each node.
Can you express SomePairs using AllPairs?
– Yes, but only by doing excessive amounts of work, and then winnowing the results.
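The first reduction on this slide can be made concrete: AllPairs over a single set expressed with SomePairs by enumerating every index pair. This sketch uses the sequential semantics of SomePairs:

```python
def some_pairs(A, L, F):                     # reference semantics
    return [F(A[i], A[j]) for (i, j) in L]

def all_pairs_via_some_pairs(A, F):
    """AllPairs(A, A, F): enumerate all n^2 pairs explicitly, then
    reshape the flat result list back into a matrix."""
    n = len(A)
    L = [(i, j) for i in range(n) for j in range(n)]
    flat = some_pairs(A, L, F)
    return [flat[i * n:(i + 1) * n] for i in range(n)]
```

The reverse direction would compute the full matrix with AllPairs and keep only the entries named in L, which is the "excessive work" the slide mentions.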

Slide 54: Abstractions as a Social Tool
Collaboration with outside groups is how we encounter the most interesting, challenging, and important problems in computer science.
However, often neither side understands which details are essential or non-essential:
– Can you deal with files that have upper case letters?
– Oh, by the way, we have 10TB of input, is that ok?
– (A little bit of an exaggeration.)
An abstraction is an excellent chalkboard tool:
– Accessible to anyone with a little bit of mathematics.
– Makes it easy to see what must be plugged in.
– Forces out essential details: data size, execution time.

Slide 55: Conclusion
Grids, clouds, and clusters provide enormous computing power, but are very challenging to use effectively.
An abstraction provides a robust, scalable solution to a narrow category of problems; each requires different kinds of optimizations.
Limiting expressive power results in systems that are usable, predictable, and reliable.
Is there a menu of abstractions that would satisfy many consumers of grid computing?

Slide 56: Acknowledgments
Cooperative Computing Lab: http://www.cse.nd.edu/~ccl
Grad students: Chris Moretti, Hoang Bui, Li Yu, Mike Olson, Michael Albrecht
Undergrads: Mike Kelly, Rory Carmichael, Mark Pasquier, Christopher Lyon, Jared Bulosan
Faculty: Patrick Flynn, Nitesh Chawla, Kenneth Judd, Scott Emrich
NSF Grants CCF-0621434, CNS-0643229
