
1 Using Application Structure to Handle Failures and Improve Performance in a Migratory File Service John Bent, Douglas Thain, Andrea Arpaci-Dusseau, Remzi Arpaci-Dusseau, and Miron Livny WiND and Condor Project 14 April 2003

2 Disclaimer We have a lot of stuff to describe, so hang in there until the end!

3 Outline
Data Intensive Applications
– Batch and Pipeline Sharing
– Example: AMANDA
Hawk: A Migratory File Service
– Application Structure
– System Architecture
– Interactions
Evaluation
– Performance
– Failure
Philosophizing

4 CPU Bound
SETI@Home, Folding@Home, etc.
– Excellent applications of distributed computing.
– KB of data, days of CPU time.
– Efficient to do tiny I/O on demand.
Supporting Systems:
– Condor
– BOINC
– Google Toolbar
– Custom software

5 I/O Bound
D-Zero data analysis:
– Excellent application of cluster computing.
– GB of data, seconds of CPU time.
– Efficient to compute whenever data is ready.
Supporting Systems:
– Fermi SAM
– High-throughput document scanning
– Custom software

6 Batch Pipelined Applications
[Figure: three pipelines, each a chain of jobs a→b→c passing pipeline-shared data down the chain; batch-shared data (x, y, z) is read by every pipeline across the batch width.]
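
For concreteness, here is a minimal sketch in Python of the structure in the figure; all names are illustrative, not part of Hawk. Each pipeline is an ordered chain of jobs with its own pipeline-shared data, batch-shared data is read by every pipeline, and the batch width is the number of concurrent pipelines.

    # Minimal model of a batch-pipelined workload; all names illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class Pipeline:
        jobs: list            # this pipeline's stages, run in order, e.g. ["a1", "b1", "c1"]
        pipeline_data: list   # files passed only between this pipeline's own stages

    @dataclass
    class Batch:
        batch_data: list      # files (x, y, z) read by every pipeline
        pipelines: list = field(default_factory=list)

        @property
        def width(self):      # "batch width" = number of concurrent pipelines
            return len(self.pipelines)

    batch = Batch(batch_data=["x", "y", "z"])
    for i in (1, 2, 3):
        batch.pipelines.append(Pipeline(jobs=[f"a{i}", f"b{i}", f"c{i}"],
                                        pipeline_data=[f"data{i}"]))
    assert batch.width == 3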

7 Example: AMANDA
[Figure: a four-stage pipeline, corsika → corama → mmc → amasim.
corsika reads corsika_input.txt (4 KB) plus batch-shared tables (NUCNUCCS, GLAUBTAR, EGSDATA3.3, QGSDATA4; 1 MB) and writes DAT (23 MB);
corama writes corama.out (26 MB);
mmc reads mmc_input.txt and writes mmc_output.dat (126 MB);
amasim reads amasim_input.dat, ice tables (3 files, 3 MB), and experiment geometry (100s of files, 500 MB), and writes amasim_output.txt (5 MB).]

8 Computing Environment
Clusters dominate:
– Similar configurations.
– Fast interconnects.
– Single administrative domain.
– Underutilized commodity storage.
– En masse, quite unreliable.
Users wish to harness multiple clusters, but have jobs that are both I/O and CPU intensive.

9 Ugly Solutions
“FTP-Net”
– User finds remote clusters.
– Manually stages data in.
– Submits jobs, deals with failures.
– Pulls data out.
– Lather, rinse, repeat.
“Remote I/O”
– Submit jobs to a remote batch system.
– Let all I/O come back to the archive.
– Return in several decades.

10 What We Really Need
Access resources outside my domain.
– Assemble your own army.
Automatic integration of CPU and I/O access.
– Forget optimal: save administration costs.
– Replacing remote with local always wins.
Robustness to failures.
– Can’t hire babysitters for New Year’s Eve.

11 Hawk: A Migratory File Service
– Automatically deploys a “task force” across an existing distributed system.
– Manages applications from a high level, using knowledge of process interactions.
– Provides dependable performance through peer-to-peer techniques.
– Understands and reacts to failures using knowledge of the system and workloads.

12 Philosophy of Hawk
“In allocating resources, strive to avoid disaster, rather than attempt to obtain an optimum.” – Butler Lampson

13 Why not AFS+Make?
Quick answer:
– Distributed filesystems provide an unnecessarily strong abstraction that is unacceptably expensive to provide in the wide area.
Better answer after we explain what Hawk is and how it works.

14 Outline
Data Intensive Applications
– Batch and Pipeline Sharing
– Example: AMANDA
Hawk: A Migratory File Service
– Application Structure
– System Architecture
– Interactions
Evaluation
– Performance
– Failure
Philosophizing

15 Workflow Language 1
job a a.sub
job b b.sub
job c c.sub
job d d.sub
parent a child c
parent b child d
[Figure: the resulting DAG: a → c and b → d.]
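
A sketch of how a workflow manager might interpret these declarations: parse the job and parent/child lines, then compute a run order with Kahn's algorithm. The parser and ordering below are illustrative assumptions, not Hawk's actual implementation.

    # Illustrative parse of the slide's declarations plus a topological
    # run order (Kahn's algorithm); not Hawk's actual workflow manager.
    from collections import defaultdict, deque

    text = """job a a.sub
    job b b.sub
    job c c.sub
    job d d.sub
    parent a child c
    parent b child d"""

    jobs, children, indegree = {}, defaultdict(list), defaultdict(int)
    for line in text.splitlines():
        w = line.split()
        if w[0] == "job":           # job <name> <submit-file>
            jobs[w[1]] = w[2]
        elif w[0] == "parent":      # parent <P> child <C>
            children[w[1]].append(w[3])
            indegree[w[3]] += 1

    ready = deque(j for j in jobs if indegree[j] == 0)
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for c in children[j]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)

    print(order)   # ['a', 'b', 'c', 'd']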

16 Workflow Language 2
volume v1 ftp://home/mydata
mount v1 a /data
mount v1 b /data
volume v2 scratch
mount v2 a /tmp
mount v2 c /tmp
volume v3 scratch
mount v3 b /tmp
mount v3 d /tmp
[Figure: volume v1 backed by mydata on home storage; scratch volumes v2 and v3 attached to the a → c and b → d pipelines.]

17 Workflow Language 3
extract v2 x ftp://home/out.1
extract v3 x ftp://home/out.2
[Figure: file x from each pipeline’s scratch volume is extracted back to home storage as out.1 and out.2.]
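
Continuing the sketch, the volume, mount, and extract declarations from slides 16 and 17 can be read into a per-job namespace table; again, the code is an assumed illustration, not Hawk's parser.

    # Illustrative parse of the volume/mount/extract declarations above.
    decls = """volume v1 ftp://home/mydata
    mount v1 a /data
    mount v1 b /data
    volume v2 scratch
    mount v2 a /tmp
    mount v2 c /tmp
    volume v3 scratch
    mount v3 b /tmp
    mount v3 d /tmp
    extract v2 x ftp://home/out.1
    extract v3 x ftp://home/out.2"""

    volumes, namespace, extracts = {}, {}, []
    for line in decls.splitlines():
        w = line.split()
        if w[0] == "volume":        # volume <name> <source | scratch>
            volumes[w[1]] = w[2]
        elif w[0] == "mount":       # mount <volume> <job> <path>
            namespace.setdefault(w[2], {})[w[3]] = w[1]
        elif w[0] == "extract":     # extract <volume> <file> <dest-url>
            extracts.append((w[1], w[2], w[3]))

    print(namespace["a"])   # {'/data': 'v1', '/tmp': 'v2'}
    print(extracts[0])      # ('v2', 'x', 'ftp://home/out.1')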

18 Mapping Logical to Physical
Abstract Jobs
– Physical jobs in a batch system.
– May run more than once!
Logical “scratch” volumes
– Temporary containers on a scratch disk.
– May be created, replicated, and destroyed.
Logical “read” volumes
– Striped across cooperative proxy caches.
– May be created, cached, and evicted.

19 Starting System
[Figure: the workflow manager, match maker, batch queue, and archive stand beside an existing PBS head node and a Condor pool of unmodified nodes.]

20 Gliding In
[Figure: a glide-in job deploys a StartD, proxy, and master onto each node of the PBS cluster and the Condor pool.]

21 Hawk Architecture
[Figure: the workflow manager, holding a system model and application flow, drives job agents through the match maker and batch queue; each node’s StartD and proxy participate in cooperative caching, with wide-area caching back to the archive.]

22 I/O Interactions
[Figure: a job issues creat(“/tmp/outfile”) and open(“/data/d15”) through the POSIX library interface; over the local area network, the proxy maps /tmp to container://host5/120 and /data to cache://host5/archive/data, serving pipeline files from local containers (e.g. Cont. 119, Cont. 120) and batch files from the cooperative block cache, other proxies, or the archive.]
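
A minimal sketch of the path rewriting implied by the figure: the two URL mappings come from the slide, while the function itself (longest-prefix matching) is an assumption about how an interposition library could translate POSIX paths.

    # Path rewriting sketch; the two mappings are from the slide, the
    # longest-prefix-match logic is an assumption.
    MOUNT_TABLE = {
        "/tmp":  "container://host5/120",
        "/data": "cache://host5/archive/data",
    }

    def rewrite(path):
        """Map a POSIX path into proxy space, longest prefix first."""
        for prefix in sorted(MOUNT_TABLE, key=len, reverse=True):
            if path == prefix or path.startswith(prefix + "/"):
                return MOUNT_TABLE[prefix] + path[len(prefix):]
        return path   # unmapped paths fall through to the local OS

    assert rewrite("/tmp/outfile") == "container://host5/120/outfile"
    assert rewrite("/data/d15") == "cache://host5/archive/data/d15"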

23 Cooperative Proxies
[Figure: proxies A, B, and C discover one another through the match maker and maintain a hash map from paths to proxies; the ordered proxy list for a path changes over time (t1: BC, t2: CBA, t3: CB, t4: empty) as proxies join and fail.]
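
One way to realize the paths-to-proxies hash map in the figure is rendezvous (highest-random-weight) hashing, sketched below. This particular scheme is an assumption, not necessarily what Hawk uses, but it yields the same kind of deterministic, failover-friendly ordered proxy lists.

    # Rendezvous hashing sketch for the paths -> proxies map; an assumed
    # scheme, chosen because every node computes the same ordered list.
    import hashlib

    def proxy_order(path, proxies):
        def weight(proxy):
            return int(hashlib.sha1(f"{proxy}:{path}".encode()).hexdigest(), 16)
        return sorted(proxies, key=weight, reverse=True)

    order = proxy_order("/archive/data/d15", ["A", "B", "C"])
    # order[0] should cache the block; on failure, fall back to order[1],
    # then order[2], without any coordination.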

24 Summary
Archive
– Sources input data, chooses coordinator.
Glide-In
– Deploys a “task force” of components.
Cooperative Proxies
– Provide dependable batch read-only data.
Data Containers
– Fault-isolated pipeline data.
Workflow Manager
– Directs the operation.

25 Outline
Data Intensive Applications
– Batch and Pipeline Sharing
– Example: AMANDA
Hawk: A Migratory File Service
– Application Structure
– System Architecture
– Interactions
Evaluation
– Performance
– Failure
Philosophizing

26 Performance Testbed
Controlled testbed:
– 32 dual-CPU 550 MHz cluster machines, 1 GB memory, SCSI disks, 100 Mb/s Ethernet.
– Simulated WAN: archive storage restricted across a router to 800 KB/s.
Also some preliminary tests on uncontrolled systems:
– MFS over a PBS cluster at Los Alamos.
– MFS over a Condor system at INFN Italy.

27 Synthetic Apps
[Figure: three synthetic workloads between jobs a and b: pipe-intensive (10 MB pipeline data), mixed (5 MB batch plus 5 MB pipeline), and batch-intensive (10 MB batch data). System configurations compared: Local, Co-Locate Data, Don’t Co-Locate, and Remote.]

28 Pipeline Optimization

29 Everything Together

30 Network Consumption

31 Failure Handling

32 Real Applications
BLAST
– Search tool for proteins and nucleotides in genomic databases.
CMS
– Simulation of a high-energy physics experiment to begin operation at CERN in 2006.
H-F
– Simulation of the non-relativistic interactions between nuclei and electrons.
AMANDA
– Simulation of a neutrino detector buried in the ice of the South Pole.

33 Application Throughput

Name     Stages   Remote    Hawk
BLAST    1        4.67      747.40
CMS      2        33.78     1273.96
HF       3        40.96     3187.22
AMANDA   4
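
Reading both columns as throughput in the same units, the ratios work out to speedups of roughly 160x for BLAST (747.40/4.67), 38x for CMS, and 78x for HF.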

34 Outline
Data Intensive Applications
– Batch and Pipeline Sharing
– Example: AMANDA
Hawk: A Migratory File Service
– Application Structure
– System Architecture
– Interactions
Evaluation
– Performance
– Failure
Philosophizing

35 Related Work
– Workflow management
– Dependency managers: TREC, make
– Private namespaces: UFO, db views
– Cooperative caching: no writes.
– P2P systems: wrong semantics.
– Filesystems: overly strong.

36 Why Not AFS+Make?
Namespaces
– Constructed per-process at submit time.
Consistency
– Enforced at the workflow level.
Selective Commit
– Everything is tossed unless explicitly saved.
Fault Awareness
– CPUs and data can be lost at any point.
Practicality
– No special permission required.

37 Conclusions
Traditional systems build from the bottom up: this disk must have five nines, or we’re in big trouble!
MFS builds from the top down: application semantics drive system structure.
By posing the right problem, we solve the traditional hard problems of file systems.

38 For More Info...
Paper in progress...
Application study:
– “Pipeline and Batch Sharing in Grid Workloads”, to appear in HPDC-2003.
– www.cs.wisc.edu/condor/doc/profiling.ps
Talk to us! Questions now?

