Enabling Data Intensive Science with Tactical Storage Systems Prof. Douglas Thain University of Notre Dame


1 Enabling Data Intensive Science with Tactical Storage Systems Prof. Douglas Thain University of Notre Dame http://www.cse.nd.edu/~dthain

2 The Cooperative Computing Lab
Our model of computer science research:
–Understand how users with complex, large-scale applications need to interact with computing systems.
–Design novel computing systems that can be applied by many different users == basic CS research.
–Deploy code in real systems with real users, suffer real bugs, and learn real lessons == applied CS.
Application Areas:
–Astronomy, Bioinformatics, Biometrics, Molecular Dynamics, Physics, Game Theory, ...
External Support: NSF, IBM, Sun
http://www.cse.nd.edu/~ccl

3 Two Talks in One
–Paper at Supercomputing
–Applications of Tactical Storage

4 Abstract
Users of distributed systems encounter many practical barriers between their jobs and the data they wish to access.
Problem: Users have access to many resources (disks), but are stuck with the abstractions (cluster NFS) provided by administrators.
Solution: Tactical Storage Systems allow any user to create, reconfigure, and tear down abstractions without bugging the administrator.

5 The Standard Model
(diagram: a transparent distributed filesystem backed by a shared disk)

6 (diagram: two clusters, each running its own transparent distributed filesystem on a shared disk; the private disk on each node goes unused; data moves between clusters by FTP, SCP, RSYNC, HTTP, ...)

7 Problems with the Standard Model
Users encounter partitions in the WAN.
–Easy to access data inside the cluster, hard outside.
–Must use different mechanisms on different links.
–Difficult to combine resources together.
Different access modes for different purposes.
–File transfer: preparing the system for intended use.
–File system: access to data for running jobs.
Resources go unused.
–Disks on each node of a cluster.
–Unorganized resources in a department/lab.
A global file system can't satisfy everyone!

8 What if...
–Users could easily access any storage?
–I could borrow an unused disk for NFS?
–An entire cluster could be used as storage?
–Multiple clusters could be combined?
–I could reconfigure structures without root? (Or bugging the administrator daily.)
Solution: Tactical Storage System (TSS)

9 Outline
Problems with the Standard Model
Tactical Storage Systems
–File Servers, Catalogs, Abstractions, Adapters
Applications:
–Remote Database Access for BaBar Code
–Remote Dynamic Linking for CDF Code
–Logical Data Access for Bioinformatics Code
–Expandable Database for MD Simulation
Improving the OS for Grid Computing

10 Tactical Storage Systems (TSS)
A TSS allows any node to serve as a file server or as a file system client.
All components can be deployed without special privileges – but with security.
Users can build up complex structures.
–Filesystems, databases, caches, ...
Two Independent Concepts:
–Resources – the raw storage to be used.
–Abstractions – the organization of storage.

11 (diagram) On one side, the standard model: a central filesystem inside a cluster, where the cluster administrator controls policy on all storage, reached by file transfer. On the other, a TSS: file servers running on UNIX workstations, where owners control policy on each machine; applications use adapters to reach a distributed filesystem abstraction or a distributed database abstraction built over those servers, with third-party transfer (3PT) between servers.

12 Components of a TSS: 1 – File Servers 2 – Catalogs 3 – Abstractions 4 – Adapters

13 1 – File Servers
Unix-Like Interface:
–open/close/read/write
–getfile/putfile to stream whole files
–opendir/stat/rename/unlink
Complete Independence:
–choose friends
–limit bandwidth/space
–evict users?
Trivial to Deploy:
–run server + setacl
–no privilege required
–can be thrown into a grid system
Flexible Access Control.
(diagram: two file servers, each controlled by its own owner, accessed by a file system client via the Chirp protocol)
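To make the interface concrete, here is a minimal in-memory sketch of a file service with the operations named above (getfile/putfile plus per-directory ACLs). The class and method shapes are illustrative only, not the real Chirp server API.

```python
# Toy sketch of a Chirp-like file service interface (names hypothetical).
class FileServer:
    def __init__(self):
        self.files = {}           # path -> file contents (bytes)
        self.acls = {"/": []}     # directory -> list of (subject_pattern, rights)

    def setacl(self, directory, subject, rights):
        # The owner runs "server + setacl" with no special privilege.
        self.acls.setdefault(directory, []).append((subject, rights))

    def putfile(self, path, data):
        # Stream a whole file in one call, as with putfile on slide 13.
        self.files[path] = data

    def getfile(self, path):
        if path not in self.files:
            raise FileNotFoundError(path)   # precise Unix-style errors
        return self.files[path]

server = FileServer()
server.setacl("/", "hostname:*.cs.nd.edu", "rl")
server.putfile("/data/nr.data", b"protein database")
print(server.getfile("/data/nr.data"))
```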

14 Related Work
Lots of file services for the Grid:
–GridFTP, NeST, SRB, RFIO, SRM, IBP, ...
–(The adapter interfaces with many of these!)
Why have another file server?
–Reason 1: Must have precise Unix semantics! Applications distinguish ENOENT vs EACCES vs EISDIR; FTP always returns error 550, regardless of the error.
–Reason 2: TSS is focused on easy deployment. No privilege required, no config files, no rebuilding, flexible access control, ...

15 Access Control in File Servers
Unix Security is not Sufficient:
–No global user database is possible or desirable.
–Mapping external credentials to Unix accounts gets messy.
Instead, Make External Names First-Class:
–Perform access control on remote, not local, names.
–Types: Globus, Kerberos, Unix, Hostname, Address.
Each directory has an ACL:
globus:/O=NotreDame/CN=DThain RWLA
kerberos:dthain@nd.edu RWL
hostname:*.cs.nd.edu RL
address:192.168.1.* RWLA
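The ACL check above amounts to wildcard-matching a typed external name against each entry and looking for the requested right. A small sketch, using the exact ACL from the slide (the helper name check_access is hypothetical):

```python
# Access control on external (remote) names, as on slide 15.
from fnmatch import fnmatch

acl = [
    ("globus:/O=NotreDame/CN=DThain", "RWLA"),
    ("kerberos:dthain@nd.edu",        "RWL"),
    ("hostname:*.cs.nd.edu",          "RL"),
    ("address:192.168.1.*",           "RWLA"),
]

def check_access(subject, right):
    """True if any ACL entry matching `subject` grants `right`."""
    return any(fnmatch(subject, pattern) and right in rights
               for pattern, rights in acl)

assert check_access("hostname:wizard.cs.nd.edu", "R")      # read: allowed
assert not check_access("hostname:wizard.cs.nd.edu", "W")  # write: denied
```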

16 Problem: Shared Namespace
(diagram: a single file server with the ACL "globus:/O=NotreDame/* RWLAX"; every matching user writes into the same directory, so files such as a.out, test.c, test.dat, and cms.exe from different users collide in one namespace)

17 Solution: Reservation (V) Right
(diagram: the server's ACL reads "O=NotreDame/CN=* V(RWLA)" – any matching user may only mkdir. When /O=NotreDame/CN=Monk runs mkdir, the new directory's ACL grants /O=NotreDame/CN=Monk RWLA; likewise /O=NotreDame/CN=Ted's new directory grants /O=NotreDame/CN=Ted RWLA. Each user thus owns a private subtree.)
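The reservation right can be sketched in a few lines: a subject matching a V(...) entry may only create a directory, and the new directory's ACL names that exact subject with the parenthesized rights. The Directory class below is an illustration, not the real server code.

```python
# Sketch of the V (reservation) right from slide 17 (names hypothetical).
from fnmatch import fnmatch

class Directory:
    def __init__(self, acl):
        self.acl = acl              # list of (subject_pattern, rights)
        self.children = {}

    def mkdir(self, subject, name):
        for pattern, rights in self.acl:
            if fnmatch(subject, pattern) and rights.startswith("V("):
                granted = rights[2:-1]   # "V(RWLA)" -> "RWLA"
                # The creator alone receives the granted rights.
                self.children[name] = Directory([(subject, granted)])
                return self.children[name]
        raise PermissionError(subject)   # mkdir only, nothing else

shared = Directory([("/O=NotreDame/CN=*", "V(RWLA)")])
mine = shared.mkdir("/O=NotreDame/CN=Monk", "monk-dir")
print(mine.acl)   # [('/O=NotreDame/CN=Monk', 'RWLA')]
```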

18 2 – Catalogs
(diagram: file servers send periodic UDP updates to catalog servers; clients query a catalog server over HTTP and receive results as XML, TXT, or ClassAds)

19 3 – Abstractions
An abstraction is an organizational layer built on top of one or more file servers.
End users choose which abstractions to employ.
Working Examples:
–CFS: Central File System
–DSFS: Distributed Shared File System
–DSDB: Distributed Shared Database
Others Possible?
–Distributed Backup System
–Striped File System (RAID/Zebra)

20 CFS: Central File System
(diagram: an application talks through the adapter to a single file server holding the file)

21 DSFS: Distributed Shared File System
(diagram: the adapter first looks up the file's location on a server holding pointers to multiple copies, then accesses the data directly on one of several file servers)

22 DSDB: Distributed Shared Database
(diagram: to insert, the adapter creates a file on a file server and inserts its location into the database server's index; to read, it queries the index and then accesses the file server directly)

23 4 – Adapter
The adapter – Parrot – sits between applications (tcsh, cat, vi, ...) and the abstractions (CFS, DSFS, DSDB) and protocols (HTTP, FTP, RFIO, NeST, SRB, gLite, ...).
Like an OS Kernel:
–Tracks processes, files, etc. (process table, file table).
–Adds new capabilities.
–Enforces the owner's policies.
Delegated Syscalls:
–Trapped via the ptrace interface.
–Action taken by Parrot.
–Resources charged to Parrot.
User Chooses the Abstraction:
–Appears as a filesystem.
–Options: timeout tolerance, consistency semantics, which servers to use, authentication mechanisms.

24 (diagram, repeating slide 11: the central filesystem model, where the cluster administrator controls policy on all storage, beside the TSS model, where owners of UNIX workstations control policy on each file server and applications reach distributed filesystem and database abstractions through adapters)

25 Performance Summary
Nothing comes for free!
–System calls: an order of magnitude slower.
–Memory bandwidth overhead: extra copies.
However:
–TSS can take full advantage of available bandwidth (unlike NFS).
–TSS can drive the network/switch to its limits.
–Typical slowdown on real applications: 5–10 percent.
–Allows one to harness resources that would otherwise go unused.
–Observation: most users are constrained by functionality.

26 Outline
Problems with the Standard Model
Tactical Storage Systems
–File Servers, Catalogs, Abstractions, Adapters
Applications:
–Remote Database Access for BaBar Code
–Remote Dynamic Linking for CDF Code
–Logical Data Access for Bioinformatics Code
–Expandable Database for MD Simulation
Improving the OS for Grid Computing

27 Remote Database Access
HEP Simulation Needs Direct DB Access:
–The application is linked against the Objectivity DB.
–Objectivity accesses the filesystem directly.
–How to distribute the application securely?
Solution: Remote Root Mount via TSS:
parrot –M /=/chirp/fileserver/rootdir
The DB code can read/write/lock files directly; the script runs under Parrot with the remote root mounted via CFS, authenticated with GSI over the WAN.
Credit: Sander Klous @ NIKHEF

28 Remote Application Loading
Modular Simulation Needs Many Libraries:
–Developed on workstations, then ported to the grid.
–The selection of library depends on the analysis technique.
–Constraint: must use HTTP (through proxies) for file access.
Solution: Dynamic Linking with TSS+HTTP:
–/home/cdfsoft -> /http/dcaf.fnal.gov/cdfsoft
–Select several MB from 60 GB of libraries.
Credit: Igor Sfiligoi @ Fermi National Lab
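The redirection above is a name remapping: Parrot rewrites paths under a mount point before any I/O happens. A sketch of the longest-prefix rewriting, using the mapping from the slide (the remap helper is hypothetical, not Parrot's internals):

```python
# Sketch of Parrot-style "-M" mount redirection (slide 28).
mounts = {"/home/cdfsoft": "/http/dcaf.fnal.gov/cdfsoft"}

def remap(path):
    """Rewrite a path by its longest matching mount-point prefix."""
    best = ""
    for src in mounts:
        if (path == src or path.startswith(src + "/")) and len(src) > len(best):
            best = src
    return mounts[best] + path[len(best):] if best else path

assert remap("/home/cdfsoft/lib/liba.so") == "/http/dcaf.fnal.gov/cdfsoft/lib/liba.so"
assert remap("/etc/passwd") == "/etc/passwd"   # unmounted paths pass through
```

Because the rewrite happens at the system-call layer, the unmodified application selects only the few MB of libraries it actually opens out of the 60 GB published over HTTP.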

29 Technical Problem
HTTP is not a filesystem! (No directories.)
–Advantages of HTTP: firewalls, caches, and admins already support it.
(diagram: the application's opendir(/home) reaches Parrot's HTTP module, which can only issue "GET /home HTTP/1.0" to the server)

30 Technical Problem (cont.)
Solution: Turn the directories into files.
–A "make httpfs" step writes each directory listing into a .dir file, which can be cached in ordinary proxies!
(diagram: opendir(/home) becomes "GET /home/.dir HTTP/1.0"; the .dir file lists alice, babar, cms)
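In code, opendir() over HTTP reduces to fetching the precomputed listing file and splitting it into entries. The sketch below assumes a one-name-per-line .dir format; both the helper and the format are illustrative.

```python
# Sketch: opendir() over HTTP via a precomputed ".dir" file (slide 30).
def opendir_http(fetch, directory):
    """fetch(url) -> bytes; return the entries listed in <dir>/.dir."""
    body = fetch(directory.rstrip("/") + "/.dir")
    return body.decode().split()

# Stand-in for an HTTP GET: a dict of cached pages, as a proxy would hold.
fake_pages = {"/home/.dir": b"alice\nbabar\ncms\n"}
entries = opendir_http(lambda url: fake_pages[url], "/home")
print(entries)   # ['alice', 'babar', 'cms']
```

Since the listing is just another file, every cache and firewall on the path handles it with no changes.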


35 Logical Access to Bio Data
Many databases of biological data exist in different formats around the world:
–Archives: Swiss-Prot, TrEMBL, NCBI, etc.
–Replicas: public, shared, private, ...
Users and applications want to refer to data objects by logical name, not location!
–"Access the nearest copy of the non-redundant protein database; I don't care where it is."
Solution: the EGEE data management system maps logical names (LFNs) to physical names (SFNs).
Credit: Christophe Blanchet, Bioinformatics Center of Lyon, CNRS IBCP, France
http://gbio.ibcp.fr/cblanchet, Christophe.Blanchet@ibcp.fr

36 Logical Access to Bio Data
(diagram) Run BLAST on LFN://ncbi.gov/nr.data:
1. BLAST calls open(LFN://ncbi.gov/nr.data); Parrot traps the call.
2. Parrot asks the EGEE File Location Service: "Where is LFN://ncbi.gov/nr.data?"
3. The service replies: "Find it at SFN://ibcp.fr/nr.data."
4. Parrot opens SFN://ibcp.fr/nr.data through the matching protocol module (Chirp, FTP, gLite, RFIO, or HTTP) and retrieves nr.data.
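The core of steps 2–3 is a catalog lookup from logical to physical names. A sketch of that resolution, with an invented catalog and helper (not the EGEE API):

```python
# Sketch of logical-to-physical name resolution (slide 36).
catalog = {"LFN://ncbi.gov/nr.data": ["SFN://ibcp.fr/nr.data"]}

def resolve(lfn):
    """Return one physical replica (SFN) for a logical file name."""
    replicas = catalog.get(lfn)
    if not replicas:
        raise FileNotFoundError(lfn)
    return replicas[0]   # a real service would pick the nearest copy

assert resolve("LFN://ncbi.gov/nr.data") == "SFN://ibcp.fr/nr.data"
```

The application never sees the SFN: Parrot performs the lookup inside the trapped open() and hands back an ordinary file descriptor.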

37 Application: Distributed MD Database
State of Molecular Dynamics Research:
–Easy to run lots of simulations!
–Difficult to understand the "big picture."
–Hard to systematically share results and ask questions.
Desired Questions and Activities:
–"What parameters have I explored?"
–"How can I share results with friends?"
–"Replicate these items five times for safety."
–"Recompute everything that relied on this machine."
GEMS: Grid Enabled Molecular Simulations:
–A distributed database for MD simulation at Notre Dame.
–XML database for indexing, TSS for storage/policy.

38 GEMS Distributed Database
(diagram: a query such as "Temp>300K and Mol==CH4" goes to the XML database server, which returns matching entries along with their replica locations, e.g. host1:fileA, host7:fileB, host3:fileC; catalog servers track the participating file servers; the adapter then reads the data directly through the DSFS abstraction)
Credit: Jesus Izaguirre and Aaron Striegel, Notre Dame CSE Dept.

39 Active Recovery in GEMS

40 GEMS and Tactical Storage
Dynamic System Configuration:
–Add/remove servers, discovered via the catalog.
Policy Control in File Servers:
–Groups can collaborate within constraints.
–Security is implemented within the file servers.
Direct Access via Adapters:
–Unmodified simulations can use the database.
–Alternate web/visualization interfaces for users.

41 Outline
Problems with the Standard Model
Tactical Storage Systems
–File Servers, Catalogs, Abstractions, Adapters
Applications:
–Remote Database Access for BaBar Code
–Remote Dynamic Linking for CDF Code
–Logical Data Access for Bioinformatics Code
–Expandable Database for MD Simulation
Improving the OS for Grid Computing

42 OS Support for Grid Computing
Distributed computing in general suffers because of limitations in the operating system. How can we improve the OS in the long term?
Resource allocation:
–Cannot reserve space -> jobs crash.
–Hard to clean up processes -> unreliable systems.
Security and permissions:
–No ACLs -> hard to share data.
–Only root can setuid -> hard to secure services.

43 Allocation in the Filesystem
(diagram: a filesystem tree root/{jobs, logs}; job23 holds input and output, and the logs directory holds ftp.log from an ftp service – a stray coredump can consume the shared space, since nothing bounds any directory)

44 Allocation in the Filesystem (cont.)
(diagram: the same tree with a "dalloc" allocation mechanism – a 200 GB allocation covers the tree, and job23 runs inside its own 100 GB allocation, so a runaway job or coredump cannot exhaust the others' space)

45 (diagram: a chain of identities – kerberos is given to the login server by root; alice and bob are created by krb5 login; student is created at run-time; the web server (httpd) creates distinct anonymous accounts anon1 and anon2, so there is no need for a global "nobody")
These two users are completely different:
root:kerberos:alice:visitor
root:kerberos:bob:visitor
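The layered identities on this slide compose left to right: each service appends the principal it authenticated or created. A sketch of the composition (the extend helper is illustrative):

```python
# Sketch of the layered identities on slide 45 (helper name hypothetical).
def extend(identity, principal):
    """Each service appends the principal it authenticated or created."""
    return identity + ":" + principal

alice = extend(extend(extend("root", "kerberos"), "alice"), "visitor")
bob   = extend(extend(extend("root", "kerberos"), "bob"), "visitor")

# The two visitors stay distinguishable even though both end in "visitor".
assert alice != bob
assert alice == "root:kerberos:alice:visitor"
```

Because the whole chain is the subject name, ACLs can grant or deny rights to "alice's visitor" without any global account database.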

46 Approach by Degrees
What can we do as an ordinary user?
–Simulate OS functionality within Parrot.
–Drawback: performance / assurance.
What can we do as root?
–A setuid toolkit to manage the system on request.
–Drawback: limitations in policy / expressiveness.
What can we do by modifying the OS?
–Modify the kernel/filesystem to support new features.
–Drawback: deployment.

47 Tactical Storage Systems
Separate Abstractions from Resources.
Components:
–Servers, catalogs, abstractions, adapters.
–Completely user level.
–Performance acceptable for real applications.
Independent but Cooperating Components:
–Owners of file servers set policy.
–Users must work within those policies.
–Within the policies, users are free to build.

48 Parting Thought
Many users of the grid are constrained by functionality, not performance.
TSS allows end users to build the structures that they need for the moment without involving an admin.
Analogy: building blocks for distributed storage.

49 Acknowledgments
Science Collaborators: Christophe Blanchet, Sander Klous, Peter Kunzst, Erwin Laure, John Poirer, Igor Sfiligoi.
CS Collaborators: Jesus Izaguirre, Aaron Striegel.
CS Students: Paul Brenner, James Fitzgerald, Jeff Hemmes, Paul Madrid, Chris Moretti, Phil Snowberger, Justin Wozniak.

50 For more information...
Cooperative Computing Lab: http://www.cse.nd.edu/~ccl
Cooperative Computing Tools: http://www.cctools.org
Douglas Thain:
–dthain@cse.nd.edu
–http://www.cse.nd.edu/~dthain

51 Performance – System Calls

52 Performance – Applications (figure; includes a "parrot only" series)

53 Performance – I/O Calls

54 Performance – Bandwidth

55 Performance – DSFS

56 SP5 Performance on EDG Testbed
Setup      Time to Init    Time/Event
Unix        446 +/- 46      64s
LAN/NFS    4464 +/- 172    113s
LAN/TSS    4505 +/- 155    113s
WAN/TSS    6275 +/- 330     88s


