The Google File System
Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung
Presenter: Chao-Han Tsai (some slides adapted from Google's lecture series)
EECS 582 – W16

Motivation
- Google needed a good distributed file system
  - Redundant storage of massive amounts of data on commodity computers (cheap and unreliable)
- Why not use an existing file system?
  - Google's problems differ from others' in terms of workload and design priorities
  - GFS is designed for Google applications, and Google applications are designed for GFS

Assumptions
- High component failure rates: inexpensive commodity components often fail
- Modest number of huge files: a few million files, each 100 MB or larger
- Files are write-once and mostly appended to
- Large streaming reads
- High sustained bandwidth is favored over low latency

Design Decisions
- Files stored as fixed-size chunks (64 MB; see the sketch below)
- Reliability through replication: each chunk is replicated across 3+ chunkservers
- Single master coordinates access and keeps metadata: simple, centralized management
- No data caching: little benefit, given large data sets and streaming reads
- Familiar interface, but a customized API: snapshot and record append
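
Because chunks have a fixed size, a client can compute which chunk holds any byte offset with simple arithmetic. A minimal sketch (the function name is illustrative, not from the paper):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # GFS's fixed chunk size: 64 MB

def chunk_location(file_offset: int) -> tuple[int, int]:
    """Map a byte offset within a file to (chunk index, offset in chunk)."""
    return file_offset // CHUNK_SIZE, file_offset % CHUNK_SIZE

# Byte 200,000,000 falls in the third chunk (index 2):
print(chunk_location(200_000_000))  # (2, 65782272)
```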

Architecture
(figure: GFS architecture diagram)

Single Master
- Problem:
  - Single point of failure
  - Scalability bottleneck
- GFS solutions:
  - Shadow masters
  - Minimize master involvement
    - Never move data through the master; use it only for metadata (see the read-path sketch below)
    - Large chunk size
    - Master delegates authority to primary replicas for data mutations (chunk leases)
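
This division of labor shows up in the read path: the master answers only the metadata question, and file data flows directly between client and chunkservers. A minimal sketch with illustrative stub classes (not the real RPC interface):

```python
CHUNK_SIZE = 64 * 1024 * 1024  # fixed GFS chunk size

class StubChunkserver:
    def read_chunk(self, handle, chunk_offset, length):
        return b"\x00" * length  # placeholder for real chunk data

class StubMaster:
    """Stand-in for the master's metadata service."""
    def find_chunk(self, path, chunk_index):
        # Returns the chunk handle and the replicas holding that chunk.
        return "handle-42", [StubChunkserver()]

def gfs_read(master, path, offset, length):
    chunk_index = offset // CHUNK_SIZE
    handle, replicas = master.find_chunk(path, chunk_index)  # metadata only
    replica = replicas[0]  # a real client would pick the closest replica
    return replica.read_chunk(handle, offset % CHUNK_SIZE, length)  # data

print(len(gfs_read(StubMaster(), "/logs/web.log", 150_000_000, 4096)))  # 4096
```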

Metadata
- Metadata is stored on the master:
  - File and chunk namespaces
  - Mapping from files to chunks
  - Locations of each chunk's replicas
- All in memory (64 bytes per chunk)
  - Fast and easily accessible (the calculation below shows why this fits)
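
A back-of-the-envelope check of why keeping all metadata in memory is feasible (the cluster size is an assumption; the 64-bytes-per-chunk figure is from the slide):

```python
CHUNK_SIZE = 64 * 1024 * 1024       # bytes of file data per chunk
METADATA_PER_CHUNK = 64             # bytes of master memory per chunk

stored_data = 1024**5               # assume 1 PB of file data in the cluster
chunks = stored_data // CHUNK_SIZE  # ~16.8 million chunks
print(chunks * METADATA_PER_CHUNK / 1024**3)  # 1.0 -> ~1 GB of master RAM
```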

Metadata (cont.)
- The master keeps an operation log for persistent logging of critical metadata updates
  - Persistent on local disk
  - Replicated
  - Checkpoints for faster recovery (sketched below)

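The log-plus-checkpoint scheme can be sketched as a tiny write-ahead log (record format, file names, and JSON encoding are all assumptions for illustration): every mutation is flushed to disk before it is applied, so a restarted master can rebuild its state from the latest checkpoint plus the log suffix written since.

```python
import json, os

class OperationLog:
    """Toy write-ahead log with checkpoints (formats are illustrative)."""

    def __init__(self, log_path="oplog.jsonl", ckpt_path="checkpoint.json"):
        self.log_path, self.ckpt_path = log_path, ckpt_path
        self.state = {}  # in-memory metadata: file -> list of chunk handles
        self._recover()

    def apply(self, record):
        # Append to the log and force it to disk BEFORE mutating state.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.state[record["file"]] = record["chunks"]

    def checkpoint(self):
        # Snapshot the state so recovery need not replay the whole log.
        with open(self.ckpt_path, "w") as f:
            json.dump(self.state, f)
        open(self.log_path, "w").close()  # truncate the replayed log

    def _recover(self):
        if os.path.exists(self.ckpt_path):
            with open(self.ckpt_path) as f:
                self.state = json.load(f)
        if os.path.exists(self.log_path):
            with open(self.log_path) as f:
                for line in f:  # replay mutations since the checkpoint
                    record = json.loads(line)
                    self.state[record["file"]] = record["chunks"]
```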

Mutations
- Mutation = write or record append
  - Must be applied at all replicas
- Goal: minimize master involvement
- Lease mechanism (sketched below)
  - Master picks one replica as primary and gives it a "lease" for mutations
  - Primary defines a serial order of mutations; all replicas follow this order
- Data flow is decoupled from control flow
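
A minimal sketch of why leases keep replicas consistent without involving the master (class names are illustrative): the primary assigns serial numbers, every replica applies mutations in that order, and so all replicas converge on identical state.

```python
class Replica:
    def __init__(self):
        self.mutations = []

    def apply(self, serial, data):
        self.mutations.append((serial, data))  # applied in the primary's order

class Primary(Replica):
    """Holder of the chunk lease; defines the serial order of mutations."""

    def __init__(self, secondaries):
        super().__init__()
        self.secondaries = secondaries
        self.next_serial = 0

    def mutate(self, data):
        serial, self.next_serial = self.next_serial, self.next_serial + 1
        self.apply(serial, data)
        for replica in self.secondaries:  # all replicas follow the same order
            replica.apply(serial, data)

secondaries = [Replica(), Replica()]
primary = Primary(secondaries)
primary.mutate(b"write A")
primary.mutate(b"write B")
assert all(r.mutations == primary.mutations for r in secondaries)
```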

Atomic Record Append
- GFS appends the record to the file atomically, at least once
  - GFS picks the offset and returns it to the client
  - Works for concurrent writers (see the sketch below)
- Used heavily by Google applications
  - For files that serve as multiple-producer/single-consumer queues
  - To merge results from multiple machines into one file
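
The key property is that the file system, not the client, chooses the offset, so concurrent appenders never race over file positions. A toy illustration (not GFS code; a real client must also tolerate duplicate records, since delivery is at least once):

```python
import threading

class AppendOnlyFile:
    def __init__(self):
        self._buf = bytearray()
        self._lock = threading.Lock()

    def record_append(self, record: bytes) -> int:
        with self._lock:             # serialize concurrent appenders
            offset = len(self._buf)  # the file system picks the offset...
            self._buf += record
            return offset            # ...and reports it back to the client

f = AppendOnlyFile()
offsets = []
threads = [threading.Thread(target=lambda: offsets.append(f.record_append(b"result\n")))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(offsets))  # four distinct, non-overlapping offsets: [0, 7, 14, 21]
```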

Master's Responsibilities
- Metadata storage
- Namespace management/locking
- Heartbeats with chunkservers
  - Give instructions, collect state, track cluster health
- Chunk creation, re-replication, rebalancing (see the sketch below)
  - Balance space utilization and access speed
  - Re-replicate data if redundancy falls below a threshold
  - Rebalance data to smooth out storage and request load
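
Heartbeats tell the master which chunkservers still hold which replicas; the re-replication check then reduces to a scan over that map. A sketch (the data structures are assumptions for illustration; three replicas is the GFS default):

```python
REPLICATION_TARGET = 3  # GFS's default replica count

def chunks_to_rereplicate(replica_map: dict[str, set[str]]) -> list[str]:
    """replica_map: chunk handle -> chunkservers known (via heartbeats)
    to hold a live replica of that chunk."""
    return [handle for handle, servers in replica_map.items()
            if len(servers) < REPLICATION_TARGET]

state = {"chunk-a": {"cs1", "cs2", "cs3"},
         "chunk-b": {"cs2"}}         # two of chunk-b's replicas were lost
print(chunks_to_rereplicate(state))  # ['chunk-b']
```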

Master's Responsibilities (cont.)
- Garbage collection (sketched below)
  - Simple and reliable in a distributed system where failures are common
  - Master logs the deletion and renames the file to a hidden name
  - Hidden files are lazily garbage collected (by default, after three days)
- Stale replica deletion
  - Detect stale replicas using chunk version numbers
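
Lazy deletion can be sketched in a few lines (the hidden-name scheme and in-memory namespace are illustrative; the three-day grace period is GFS's default):

```python
import time

GRACE_PERIOD = 3 * 24 * 3600  # seconds; GFS's default retention window

namespace = {}  # path -> (chunk handles, deletion timestamp or None)

def delete(path):
    chunks, _ = namespace.pop(path)
    hidden = f".deleted{path}.{int(time.time())}"
    namespace[hidden] = (chunks, time.time())  # only rename; reclaim later

def garbage_collect(now=None):
    now = now or time.time()
    for path in [p for p in namespace if p.startswith(".deleted")]:
        chunks, deleted_at = namespace[path]
        if now - deleted_at > GRACE_PERIOD:
            del namespace[path]  # chunks become garbage, reclaimable lazily

namespace["/logs/old"] = (["chunk-1", "chunk-2"], None)
delete("/logs/old")
garbage_collect(now=time.time() + GRACE_PERIOD + 1)
print(namespace)  # {} once the grace period has passed
```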

Fault Tolerance
- High availability
  - Fast recovery: master and chunkservers can restart in a few seconds
  - Chunk replication: default is three replicas
  - Shadow masters
- Data integrity
  - Checksum every 64 KB block in each chunk (sketched below)
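
Checksumming each 64 KB block (rather than whole chunks) lets a chunkserver verify just the blocks covered by a read. A sketch; the use of CRC32 here is an assumption for illustration:

```python
import zlib

BLOCK_SIZE = 64 * 1024  # GFS checksums each 64 KB block of a chunk

def block_checksums(chunk: bytes) -> list[int]:
    return [zlib.crc32(chunk[i:i + BLOCK_SIZE])
            for i in range(0, len(chunk), BLOCK_SIZE)]

def verify(chunk: bytes, checksums: list[int]) -> bool:
    # On a mismatch, a real chunkserver returns an error and reports the
    # corruption to the master, which re-replicates from a good copy.
    return block_checksums(chunk) == checksums

data = b"x" * (3 * BLOCK_SIZE)
sums = block_checksums(data)
corrupted = data[:BLOCK_SIZE] + b"y" + data[BLOCK_SIZE + 1:]
print(verify(data, sums), verify(corrupted, sums))  # True False
```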

Evaluation
(measurement figures from the talk are not included in the transcript)

Conclusion
- GFS shows how to support large-scale processing workloads on commodity hardware
  - Designed to tolerate frequent component failures
  - Optimized for huge files that are mostly appended to and then read
  - Simple solution (single master)