
1. Efficiently Sharing Common Data
HTCondor Week 2015
Zach Miller (zmiller@cs.wisc.edu)
Center for High Throughput Computing
Department of Computer Sciences, University of Wisconsin-Madison

2. “Problems” with HTCondor
› Input files are never reused
  – from one job to the next
  – by multiple slots on the same machine
› Input files are transferred serially from the machine where the job was submitted
› This results in the submit machine often transferring multiple copies of the same file simultaneously (bad!), sometimes to the same machine (even worse!)

3. HTCache
› Enter the HTCache!
› Runs on the execute machine
› Runs under the condor_master just like any other daemon
› One daemon serves all users of that machine
› Runs with same privilege as the startd

4. HTCache
› Cache is on disk
› Persists across restarts
› Configurable size
› Configurable cache replacement policy (a hypothetical configuration sketch follows)
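
HTCache is not part of any HTCondor release (see the project status slide), so there is no official configuration. But "runs under the condor_master just like any other daemon" suggests the standard DAEMON_LIST mechanism; a hypothetical sketch, with every HTCACHE_* knob name invented for illustration:

    # Register the (hypothetical) daemon with the master.
    DAEMON_LIST = $(DAEMON_LIST) HTCACHE
    HTCACHE     = $(SBIN)/condor_htcache

    # Illustrative knobs: where the on-disk cache lives, how big it
    # may grow, and which plugin scores the files.
    HTCACHE_DIR           = $(SPOOL)/htcache
    HTCACHE_MAX_SIZE_MB   = 20480
    HTCACHE_POLICY_PLUGIN = /etc/condor/htcache_policy.so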

5. HTCache
› The cache is shared
› All slots use the same local cache
› Even if the user is different (data is data!)
› Thus, the HTCache needs the ability to write files into a job’s sandbox as the user that will run the job

6. Preparing Job Sandbox
› Instead of fetching files from the shadow, the job instructs the HTCache to put specific files into the sandbox
› If the file is in the cache, the HTCache COPIES the file into the sandbox
› Each slot gets its own copy, in case the job decides to modify it (as opposed to hard- or soft-linking the file into the sandbox; see the sketch below)
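
The slides don't show this step's code; a minimal C++17 sketch of "copy, then hand ownership to the job user" (the function name and signature are invented for illustration):

    #include <filesystem>
    #include <system_error>
    #include <sys/types.h>
    #include <unistd.h>

    namespace fs = std::filesystem;

    // Copy a cached file into the job sandbox, then chown it to the job
    // user. A real daemon would need the startd's privilege to chown.
    bool place_in_sandbox(const fs::path &cached, const fs::path &sandbox,
                          uid_t job_uid, gid_t job_gid)
    {
        const fs::path dest = sandbox / cached.filename();
        std::error_code ec;
        // A full copy, not a hard or soft link, so the job may modify
        // its copy without disturbing the cache or other slots.
        fs::copy_file(cached, dest, fs::copy_options::overwrite_existing, ec);
        if (ec)
            return false;
        return ::chown(dest.c_str(), job_uid, job_gid) == 0;
    }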

7. Preparing Job Sandbox
› If the file is not in the cache, the HTCache fetches the file directly into the sandbox and then possibly adds it to the cache
› Wait… possibly?

8. Cache Policy
› Yes, possibly.
› Obvious case: the file is larger than the cache
› Larger question: which files are the best to keep?
› Cache policy is one of those things where one solution rarely works best in all cases

9. Cache Policy
› There are 10 problems in Computer Science:
  – Caching
  – Levels of indirection
  – Off-by-one errors
› Allow flexible caching by adding a level of indirection. Don’t use size, time, etc., but rather the “value” of a file.

10. Cache Policy
› How do we determine the value?
› Another trick: punt to the admin!
› The cache policy is implemented as a plugin, using a dynamically loaded library:

    double valuationFun(long size, long age, int stickiness,
                        long uses, long bytes_seeded,
                        long time_since_seed)
    {
        return (stickiness - age) * size;
    }

11. Cache Policy
› The plugin determines the “value” of a file using these input parameters (a sketch of loading such a plugin follows the list):
  – File size
  – Time the file entered the cache
  – Time last accessed
  – Number of hits
  – “Stickiness” (a hint provided by the submit node… more on that later)
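
The talk doesn't show the loading side of the plugin interface. On Linux, the usual mechanism for a dynamically loaded library is dlopen/dlsym; a minimal sketch, assuming the plugin exports the unmangled symbol valuationFun (compiled as C, or declared extern "C") and a hypothetical plugin path. Build with: g++ load_policy.cpp -ldl

    #include <dlfcn.h>
    #include <cstdio>

    // Function-pointer type matching the plugin shown on slide 10.
    typedef double (*valuation_fn)(long size, long age, int stickiness,
                                   long uses, long bytes_seeded,
                                   long time_since_seed);

    int main() {
        // The plugin path is hypothetical; the admin would configure it.
        void *handle = dlopen("/etc/condor/htcache_policy.so", RTLD_NOW);
        if (!handle) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }

        valuation_fn value = (valuation_fn)dlsym(handle, "valuationFun");
        if (!value) { fprintf(stderr, "dlsym: %s\n", dlerror()); return 1; }

        // Score a 1 GB file that entered the cache 60 s ago, stickiness 1000.
        printf("value = %g\n", value(1L << 30, 60, 1000, 5, 0, 0));
        dlclose(handle);
        return 0;
    }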

12. Cache Policy
› When deciding whether or not to cache a file, the HTCache considers all files currently in the cache, plus the file under consideration
› Computes the “value” of each file
› Finds the “maximum value cache” that fits in the allocated size (see the selection sketch below)
› May or may NOT include the file just fetched
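
Finding the exact "maximum value cache" that fits the allocated size is a 0/1 knapsack problem, and the slides don't say how HTCache solves it. A sketch of the common greedy approximation, keeping files in decreasing value-per-byte order (all names invented):

    #include <algorithm>
    #include <string>
    #include <vector>

    struct CachedFile {
        std::string name;
        long        size;   // bytes (assumed > 0)
        double      value;  // score from the policy plugin
    };

    // Pick which files to keep: sort by value density, then fill the
    // budget. The just-fetched file competes like any other candidate.
    std::vector<CachedFile> select_cache(std::vector<CachedFile> candidates,
                                         long capacity)
    {
        std::sort(candidates.begin(), candidates.end(),
                  [](const CachedFile &a, const CachedFile &b) {
                      return a.value / a.size > b.value / b.size;
                  });
        std::vector<CachedFile> keep;
        long used = 0;
        for (const auto &f : candidates) {
            if (used + f.size <= capacity) {
                keep.push_back(f);
                used += f.size;
            }
        }
        return keep;   // anything not selected is evicted (or never cached)
    }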

13. Submit Node HTCache
› There is a submit-side component as well, although it has a slightly different role:
  – Does not have a dedicated disk cache
  – Instead, serves all files requested by jobs
  – Periodically scans the queue, counts the number of jobs that use each input file, and broadcasts this “stickiness” value to all HTCache daemons (a sketch of the counting pass follows)
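
The scan itself isn't shown in the talk; conceptually it is one counting pass over the input-file lists of all queued jobs (in HTCondor, a job's TransferInput attribute is a comma-separated list of its input files). A sketch with placeholder input:

    #include <cstdio>
    #include <map>
    #include <sstream>
    #include <string>
    #include <vector>

    // One counting pass: how many queued jobs name each input file?
    // A real daemon would read TransferInput from every job ClassAd in
    // the queue; here the lists are placeholders.
    std::map<std::string, long> compute_stickiness(
            const std::vector<std::string> &transfer_input_lists)
    {
        std::map<std::string, long> stickiness;
        for (const auto &list : transfer_input_lists) {
            std::stringstream ss(list);        // e.g. "db.fasta,params.cfg"
            std::string file;
            while (std::getline(ss, file, ','))
                ++stickiness[file];            // one more job wants this file
        }
        return stickiness;  // broadcast to all execute-side HTCaches
    }

    int main() {
        auto s = compute_stickiness({"db.fasta,params.cfg", "db.fasta"});
        for (const auto &[f, n] : s)
            printf("%s: %ld\n", f.c_str(), n); // db.fasta: 2, params.cfg: 1
    }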

14. Example
› Suppose I have a cluster of 25 eight-core machines
› I have a 1GB input file common to all my jobs (a common scenario for, say, BLAST)
› I submit 1000 jobs
› Old way: each time a job starts up, it transfers the 1GB file to the sandbox (1000 × 1GB = 1TB total)

15. Example
› New way: each of the 25 machines gets the file once, shares it among all 8 slots, and it persists across jobs
› Naïve calculation: 25GB transferred (as opposed to 1TB)
› Of course, this ignores competition for the cache

16. Example
› This is where “stickiness” helps
› If I submit a separate batch of 50 jobs using a different 1GB input, the HTCache can look at the stickiness and decide not to evict the first 1GB file, since 1000 jobs are scheduled to use it as opposed to 50 (worked numbers below)
› It’s possible to write a cache policy tailored to your cluster’s particular workload
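
To see the numbers, plug both batches into the sample policy from slide 10, assuming both files are 1 GB and entered the cache 300 seconds ago:

    #include <cstdio>

    // The sample policy from slide 10, inlined for a worked example
    // (the last three parameters are unused by this policy).
    double valuationFun(long size, long age, int stickiness,
                        long, long, long)
    {
        return (double)(stickiness - age) * size;
    }

    int main() {
        const long GB = 1L << 30;
        printf("1000-job file: %.3g\n", valuationFun(GB, 300, 1000, 0, 0, 0));
        printf("  50-job file: %.3g\n", valuationFun(GB, 300,   50, 0, 0, 0));
        // (1000-300)*2^30 ~  7.5e11
        // (  50-300)*2^30 ~ -2.7e11
        // The 50-job file's value has gone negative: it is evicted first.
        return 0;
    }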

17. Success!
› This already has huge advantages
› Even if the cache does nothing useful and makes all the wrong choices, it can do NO WORSE than the existing method of transferring the file every time
› A huge advantage: multiple slots share the same cache! (And this advantage grows as the number of cores grows)
› Massively reduces network load on the schedd

18. HTCache Results

19. However…
› Although the load is reduced, the schedd is still the single source for all input files

20. However…
› What if there was a way to get the files from somewhere else?
› Maybe even bits of the files from multiple different sources?
› Peer-to-peer?
› We already have an HTCache deployed on all the execute nodes…

21. BitTorrent

22. Submit Node w/ BitTorrent
› The HTCache running on the submit node acts as a SeedServer
› It always has all pieces of the files that may be read; recall that it does not manage a cache, it only serves the already-existing files in place
› When a job is submitted, its input files are automatically added to the seed server (an illustrative sketch follows)
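
The talk doesn't name a BitTorrent implementation. As an illustration only, here is roughly how a SeedServer could build a .torrent for one submitted input file and seed it in place, using libtorrent-rasterbar (1.2-era C++ API; session setup and peer discovery, e.g. a tracker or DHT, are omitted):

    #include <libtorrent/add_torrent_params.hpp>
    #include <libtorrent/bencode.hpp>
    #include <libtorrent/create_torrent.hpp>
    #include <libtorrent/session.hpp>
    #include <libtorrent/torrent_info.hpp>
    #include <fstream>
    #include <iterator>
    #include <memory>
    #include <string>

    namespace lt = libtorrent;

    // Build metainfo for one input file and seed it from where it already
    // lives; the SeedServer never copies the file into a cache of its own.
    void seed_in_place(lt::session &ses, const std::string &dir,
                       const std::string &file)
    {
        lt::file_storage fs;
        lt::add_files(fs, dir + "/" + file);   // describe the single file
        lt::create_torrent t(fs);
        lt::set_piece_hashes(t, dir);          // hash the pieces from disk

        // Persist the .torrent so execute-side HTCaches can be handed it.
        std::ofstream out(file + ".torrent", std::ios::binary);
        lt::bencode(std::ostream_iterator<char>(out), t.generate());
        out.close();

        lt::add_torrent_params p;
        p.ti = std::make_shared<lt::torrent_info>(file + ".torrent");
        p.save_path = dir;    // the file is already complete here: just seed
        ses.add_torrent(std::move(p));
    }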

23. Execute Node w/ BitTorrent
› The HTCache uses BitTorrent to retrieve the file directly into the sandbox first (sketch below)
› It then optionally adds the file to its own cache
› Thus, BitTorrent is used to transfer files even if they won’t end up in the cache
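
The execute-side counterpart in the same hypothetical libtorrent sketch: the save path is simply the job sandbox, so the pieces land directly there whether or not the file is afterwards added to the cache:

    #include <libtorrent/add_torrent_params.hpp>
    #include <libtorrent/session.hpp>
    #include <libtorrent/torrent_info.hpp>
    #include <memory>
    #include <string>

    namespace lt = libtorrent;

    // Pull one input straight into the sandbox. Pieces can come from the
    // SeedServer and from any peer HTCache that already holds them.
    void fetch_into_sandbox(lt::session &ses,
                            const std::string &torrent_file,
                            const std::string &sandbox_dir)
    {
        lt::add_torrent_params p;
        p.ti = std::make_shared<lt::torrent_info>(torrent_file);
        p.save_path = sandbox_dir;  // download target: the sandbox itself
        ses.add_torrent(std::move(p));
        // Whether to ALSO keep a copy in the cache is decided afterwards
        // by the valuation policy.
    }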

24. Putting It All Together

25. Putting It All Together

26. Project Status
› “GradStudent-ware”:
  – Was done as a class project
  – Doesn’t yet meet the exceedingly high standards for committing into our main code repository
› BitTorrent traffic is completely independent of HTCondor; as such, it doesn’t work with the shared_port daemon

27. Conclusion
› Obvious statement of the year: caching is good!
› Runner-up: using peer-to-peer file transfer can be faster than one-to-many file transfer!
› However, the nature of scientific workloads and multi-core machines creates an environment where these are especially advantageous

28. Conclusion
› Thank you!
› Questions? Comments?
› Ask now, talk to me at lunch, or email me at zmiller@cs.wisc.edu

