Presentation on theme: "Building PetaByte Servers"— Presentation transcript:

1 Building PetaByte Servers
Jim Gray, Microsoft Research
Kilo 10^3, Mega 10^6, Giga 10^9, Tera 10^12 (today, we are here), Peta 10^15, Exa 10^18

2 Outline
The challenge: Building GIANT data stores
  for example, the EOS/DIS 15 PB system
Conclusion 1: Think about MOX and SCANS
Conclusion 2: Think about Clusters
  SMP report
  Cluster report

3 The Challenge -- EOS/DIS
Antarctica is melting -- 77% of fresh water liberated, sea level rises 70 meters
Chico & Memphis are beach-front property
New York, Washington, SF, LA, London, Paris (under water)
Let's study it! Mission to Planet Earth
EOS: Earth Observing System (17B$ => 10B$)
  50 instruments on 10 satellites, Landsat (added later)
EOS DIS: Data Information System:
  3-5 MB/s raw, MB/s processed
  4 TB/day, 15 PB by year 2007

4 The Process Flow Data arrives and is pre-processed.
instrument data is calibrated, gridded, averaged
Geophysical data is derived
Users ask for stored data OR to analyze and combine data.
Can make the pull-push split dynamically
(Diagram: Push Processing, Pull Processing, Other Data)

5 Designing EOS/DIS
Expect that millions will use the system (online)
Three user categories:
  NASA - funded by NASA to do science
  Global Change 10 k - other dirt bags
  Internet 20 m - everyone else: grain speculators, environmental impact reports, new applications
  => discovery & access must be automatic
Allow anyone to set up a peer-node (DAAC & SCF)
Design for Ad Hoc queries, Not Standard Data Products
If push is 90%, then 10% of data is read (on average)
  => a failure: no one uses the data
In DSS, push is 1% or less
  => computation demand is enormous (pull:push is 100:1)

6 Obvious Points: EOS/DIS will be a cluster of SMPs
It needs 16 PB storage
  = 1 M disks in current technology
  = 500K tapes in current technology
It needs 100 TeraOps of processing
  = 100K processors (current technology) and ~100 Terabytes of DRAM
1997 requirements are 1000x smaller
  smaller data rate
  almost no re-processing work
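These device counts follow from simple division. A back-of-envelope check in Python, using the per-device capacities the slide's own arithmetic implies (~16 GB per disk, ~32 GB per tape, ~1 GOPS per processor; these are my assumptions, not stated on the slide):

# Back-of-envelope check of the EOS/DIS sizing claims (assumed device sizes).
PB = 10**15
storage_need = 16 * PB           # 16 PB of storage
disk_capacity = 16 * 10**9       # assume ~16 GB per disk
tape_capacity = 32 * 10**9       # assume ~32 GB per tape cartridge

print(storage_need / disk_capacity)   # ~1,000,000 disks  -> "1 M disks"
print(storage_need / tape_capacity)   # ~500,000 tapes    -> "500K tapes"

processing_need = 100 * 10**12   # 100 TeraOps
per_processor = 10**9            # assume ~1 GOPS per processor
print(processing_need / per_processor)  # ~100,000 processors -> "100K processors"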

7 The architecture
2+N data center design
Scaleable OR-DBMS
Emphasize Pull vs Push processing
Storage hierarchy
Data Pump
Just in time acquisition

8 2+N data center design duplex the archive (for fault tolerance)
let anyone build an extract (the +N)
Partition data by time and by space (store 2 or 4 ways).
Each partition is a free-standing OR-DBMS (similar to Tandem, Teradata designs).
Clients and Partitions interact via standard protocols: OLE-DB, DCOM/CORBA, HTTP, …

9 Hardware Architecture
2 Huge Data Centers
Each has 50 to 1,000 nodes in a cluster
Each node has about 25…250 TB of storage:
  SMP              … to 50 Bips         …K$
  DRAM             50 GB to 1 TB        …K$
  100 disks        … TB to 230 TB       200K$
  10 tape robots   25 TB to 250 TB      200K$
  2 Interconnects  1 GBps to 100 GBps   20K$
Node costs 500K$
Data Center costs 25M$ (capital cost)
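The per-component prices roughly roll up to the quoted node and data-center figures. A rough check, assuming the SMP and DRAM (whose prices were lost in transcription) account for the remaining ~80K$ per node:

# Rough cost rollup for one node and one data center (assumed SMP+DRAM cost).
disk_cost = 200         # K$, 100 disks
tape_robot_cost = 200   # K$, 10 tape robots
interconnect_cost = 20  # K$, 2 interconnects
smp_dram_cost = 80      # K$, assumed -- the exact figures are missing above

node_cost = disk_cost + tape_robot_cost + interconnect_cost + smp_dram_cost
print(node_cost)                             # ~500 K$ per node, as on the slide

nodes_per_center = 50                        # low end of "50 to 1,000 nodes"
print(node_cost * nodes_per_center / 1000)   # ~25 M$ per data center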

10 Scaleable OR-DBMS
Adopt cluster approach (Tandem, Teradata, VMScluster, DB2/PE, Informix, …)
System must scale to many processors, disks, links
OR DBMS based on standard object model: CORBA or DCOM (not vendor specific)
Grow by adding components
System must be self-managing

11 Storage Hierarchy Cache hot 10% (1.5 PB) on disk.
Keep cold 90% on near-line tape.
Remember recent results on speculation.
(Pyramid diagram: 10-TB RAM; 1 PB of Disk, 10,000 drives; 15 PB of Tape Robot, 4x1,000 robots; 500 nodes)

12 Data Pump Some queries require reading ALL the data (for reprocessing)
Each Data Center scans the data every 2 weeks.
Data rate 10 PB/day = 10 TB/node/day = 120 MB/s
Compute on demand small jobs:
  less than 1,000 tape mounts
  less than 100 M disk accesses
  less than 100 TeraOps
  (less than 30 minute response time)
For BIG JOBS scan entire 15 PB database
Queries (and extracts) "snoop" this data pump.
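The per-node rate is simple arithmetic: 10 TB per node per day spread over the 86,400 seconds in a day is roughly 120 MB/s. A quick check:

# Check the data-pump rate: 10 TB per node per day expressed in MB/s.
TB = 10**12
seconds_per_day = 24 * 60 * 60

per_node_per_day = 10 * TB
rate_mb_per_s = per_node_per_day / seconds_per_day / 10**6
print(round(rate_mb_per_s))   # ~116 MB/s, i.e. the ~120 MB/s quoted on the slide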

13 Just-in-time acquisition
Hardware prices decline 20%-40%/year
So buy at last moment
Buy best product that day: commodity
Depreciate over 3 years so that facility is fresh. (after 3 years, cost is 23% of original)
(Chart: EOS DIS Disk Storage Size and Cost, 1994-2008 -- Data Need (TB) and Storage Cost (M$) per year, assuming 40% price decline/year; annotations for 30% and 60% decline, with the 60% decline peaking at 10M$)
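The 23% figure is just the price decline compounded: at 40% per year, three-year-old hardware represents about 0.6^3 of its original price. A quick check across the quoted decline rates (my framing of the slide's arithmetic):

# What fraction of the original price does 3-year-old hardware represent,
# if prices decline at a fixed rate per year?
for annual_decline in (0.20, 0.30, 0.40):
    residual = (1 - annual_decline) ** 3
    print(annual_decline, round(residual, 2))
# 0.2 -> 0.51, 0.3 -> 0.34, 0.4 -> 0.22  (roughly the slide's "23% of original")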

14 Problems
HSM
Design and Meta-data
Ingest
Data discovery, search, and analysis
reorg-reprocess
disaster recovery
cost

15 Trends: New Applications
The Old World: Millions of objects; 100-byte objects
The New World: Billions of objects; Big objects (1 MB)
Multimedia: Text, voice, image, video, ...
The paperless office
Library of Congress online (on your campus)
All information comes electronically: entertainment, publishing, business
Information Network, Knowledge Navigator, Information at Your Fingertips

16 What's a Terabyte? Terror Byte!! (.1% of a PetaByte!)
1,000,000,000 business letters = 150 miles of bookshelf
100,000,000 book pages = 15 miles of bookshelf
50,000,000 FAX images = 7 miles of bookshelf
10,000,000 TV pictures (mpeg) = 10 days of video
4,000 LandSat images
Library of Congress (in ASCII) is 25 TB
1980: 200 M$ of disc (…,000 discs); 5 M$ of tape silo (…,000 tapes)
1994: … M$ of magnetic disc (… discs); 500 K$ of optical disc robot (… platters); 50 K$ of tape silo (… tapes)
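The object counts are just 1 TB divided by a typical object size. A rough check, with per-object sizes assumed by me (roughly what the slide's counts imply):

# How many objects of each kind fit in one terabyte, given rough object sizes?
TB = 10**12
object_sizes = {
    "business letter": 1_000,     # ~1 KB of ASCII (assumed)
    "book page":       10_000,    # ~10 KB (assumed)
    "FAX image":       20_000,    # ~20 KB compressed (assumed)
    "MPEG TV picture": 100_000,   # ~100 KB (assumed)
}
for name, size in object_sizes.items():
    print(name, TB // size)
# ~1,000,000,000 letters, ~100,000,000 pages, ~50,000,000 FAXes, ~10,000,000 pictures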

17 The Cost of Storage & Access
File Cabinet:
  cabinet (4 drawer)     250$
  paper (24,000 sheets)  250$
  space (@ 10$/ft2)      180$
  total ~680$, ~3 ¢/sheet
Disk:
  disk (9 GB =) …,000$
  ASCII: … m pages, … ¢/sheet (100x cheaper)
  Image: 200 k pages, … ¢/sheet (similar to paper)

18 Standard Storage Metrics
Capacity:
  RAM: MB and $/MB: today at 100 MB & 10 $/MB
  Disk: GB and $/GB: today at 10 GB and 200 $/GB
  Tape: TB and $/TB: today at .1 TB and 100 k$/TB (nearline)
Access time (latency):
  RAM: 100 ns
  Disk: ms
  Tape: second pick, 30 second position
Transfer rate:
  RAM: GB/s
  Disk: MB/s (arrays can go to 1 GB/s)
  Tape: MB/s (not clear that striping works)

19 New Storage Metrics: KOXs, MOXs, GOXs, SCANs?
KOX: How many kilobyte objects served per second -- the file server, transaction processing metric
MOX: How many megabyte objects served per second -- the Mosaic metric
GOX: How many gigabyte objects served per hour -- the video & EOSDIS metric
SCANS: How many scans of all the data per day -- the data mining and utility metric
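Each metric is throughput divided by object size over the appropriate time unit. A minimal sketch of how one might estimate them for a server, assuming you know its sustainable aggregate bandwidth and total data size; the function and parameter names are mine, and seek/transaction overheads are ignored, so these are bandwidth-limited upper bounds:

# Illustrative calculation of KOX / MOX / GOX / SCANS for a hypothetical server.
def storage_metrics(bandwidth_bytes_per_s, total_data_bytes):
    kox = bandwidth_bytes_per_s / 1e3           # KB objects served per second
    mox = bandwidth_bytes_per_s / 1e6           # MB objects served per second
    gox = bandwidth_bytes_per_s * 3600 / 1e9    # GB objects served per hour
    scans = bandwidth_bytes_per_s * 86400 / total_data_bytes  # full scans per day
    return kox, mox, gox, scans

# Example: a node sustaining 120 MB/s over 25 TB of local data.
print(storage_metrics(120e6, 25e12))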

20 Summary (of new ideas) Storage accesses are the bottleneck
Accesses are getting larger (MOX, GOX, SCANS)
Capacity and cost are improving
BUT latencies and bandwidth are not improving much
SO use parallel access (disk and tape farms)

21 How To Get Lots of MOX, GOX, SCANS
parallelism: use many little devices in parallel
Beware of the media myth
Beware of the access time myth
At 10 MB/s: 1.2 days to scan 1 Terabyte
1,000 x parallel: 1.5 minute SCAN
Parallelism: divide a big problem into many smaller ones to be solved in parallel.
(Diagram: 1 Terabyte read at 10 MB/s vs. 1 Terabyte spread across 1,000 parallel devices)
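The scan-time claim is plain arithmetic: 1 TB at 10 MB/s takes about 10^5 seconds, and splitting the work over 1,000 devices cuts that to under two minutes. A quick check:

# Scan time for 1 TB at 10 MB/s, serial vs. 1,000-way parallel.
TB = 10**12
rate = 10 * 10**6               # 10 MB/s per device

serial_seconds = TB / rate
print(serial_seconds / 86400)   # ~1.16 days  ("1.2 days to scan")

parallel_seconds = serial_seconds / 1000
print(parallel_seconds / 60)    # ~1.7 minutes (the slide rounds to "1.5 minute SCAN")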

22 Meta-Message: Technology Ratios Are Important
If everything gets faster & cheaper at the same rate, then nothing really changes.
Some things getting MUCH BETTER:
  communication speed & cost 1,000x
  processor speed & cost 100x
  storage size & cost 100x
Some things staying about the same:
  speed of light (more or less constant)
  people (10x worse)
  storage speed (only 10x better)

23 Outline
The challenge: Building GIANT data stores
  for example, the EOS/DIS 15 PB system
Conclusion 1: Think about MOX and SCANS
Conclusion 2: Think about Clusters
  SMP report
  Cluster report

24 Scaleable Computers BOTH SMP and Cluster
Grow Up with SMP: 4xP6 is now standard
Grow Out with Cluster: Cluster has inexpensive parts
(Diagram: SMP Super Server, Departmental Server, Personal System, Cluster of PCs)

25 TPC-C Current Results
Best Performance is 30,390 tpmC @ $305/tpmC (Oracle/DEC)
Best Price/Perf. is 7,693 tpmC @ $43.5/tpmC (MS SQL/Dell)
Graphs show: UNIX high price, UNIX scaleup diseconomy
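Multiplying throughput by price/performance gives the implied total system price, which is what makes the gap vivid. A quick check (my arithmetic, not on the slide):

# Total system price implied by the TPC-C numbers: tpmC * $/tpmC.
best_perf_price = 30_390 * 305         # Oracle/DEC
best_price_perf_price = 7_693 * 43.5   # MS SQL/Dell
print(best_perf_price)        # ~$9.3M system
print(best_price_perf_price)  # ~$0.33M system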

26 Compare SMP Performance

27 TPC-C improved fast: 40% hardware, 100% software, 100% PC Technology

28 Where the money goes

29 What does this mean? PC Technology is 3x cheaper than high-end SMPs
PC node performance is 1/2 of high-end SMPs (4xP6 vs 20xUltraSparc)
Peak performance is a cluster: Tandem 100-node cluster, DEC Alpha 4x8 cluster
Commodity solutions WILL come to this market

30 Cluster: Shared What?
Shared Memory Multiprocessor:
  Multiple processors, one memory
  all devices are local
  DEC, SGI, Sun, Sequent nodes -- easy to program, not commodity
Shared Disk Cluster:
  an array of nodes, all share common disks
  VAXcluster + Oracle
Shared Nothing Cluster:
  each device local to a node
  ownership may change
  Tandem, SP2, Wolfpack

31 Clusters being built Teradata 1500 nodes +24 TB disk (50k$/slice)
Tandem, VMScluster 150 nodes (100k$/slice)
Intel 9,000 nodes @ 55M$ (6k$/slice)
Teradata, Tandem, DEC moving to NT + low slice price
IBM: … m$ (200k$/slice)
PC clusters (bare handed) at dozens of nodes: web servers (msn, PointCast, …), DB servers
KEY TECHNOLOGY HERE IS THE APPS.
  Apps distribute data
  Apps distribute execution
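"Slice price" is just total system price divided by node count. A quick check against the figures that survive in the transcript:

# Price per slice (node) = system price / node count.
intel_slice = 55_000_000 / 9_000
print(round(intel_slice))   # ~6,100 $, i.e. the "6k$/slice" quoted for the Intel machine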

32 Cluster Advantages Clients and Servers made from the same stuff.
Inexpensive: Built with commodity components
Fault tolerance: Spare modules mask failures
Modular growth: grow by adding small modules
Parallel data search: use multiple processors and disks

33 Clusters are winning the high end
You saw that a 4x8 cluster has best TPC-C performance
This year, a 95xUltraSparc cluster won the MinuteSort Speed Trophy (see NOWsort)
Ordinal 16x on SGI Origin is close (but the loser!)

34 Clusters (Plumbing)
Single system image: naming, protection/security, management/load balance
Fault Tolerance: Wolfpack Demo
Hot Pluggable hardware & software

35 So, What’s New? When slices cost 50k$, you buy 10 or 20.
Manageability, programmability, usability become key issues (total cost of ownership).
PCs are MUCH easier to use and program
(Diagram: MPP Vicious Cycle -- new MPP & new OS, new apps, no customers! vs. CP/Commodity Virtuous Cycle -- standard OS & hardware, customers, apps; standards allow progress and investment protection)

36 Windows NT Server Clustering High Availability On Standard Hardware
Standard API for clusters on many platforms
No special hardware required.
Resource Group is unit of failover
Typical resources: shared disk, printer, ..., IP address, NetName, Service (Web, SQL, File, Print, Mail, MTS)
API to define resource groups, dependencies, resources; GUI administrative interface
A consortium of 60 HW & SW vendors (everybody who is anybody)
2-Node Cluster in beta test now. Available 97H1. >2 node is next
SQL Server and Oracle Demo on it today
Key concepts:
  System: a node
  Cluster: systems working together
  Resource: hard/software module
  Resource dependency: resource needs another
  Resource group: fails over as a unit
  Dependencies: do not cross group boundaries

Speaker notes: The Wolfpack program has three goals: (1) to be the most reliable way to run Windows NT Server, (2) to be the most cost-effective high-availability platform, and (3) to be the easiest platform for developing cluster-aware solutions. Let's look at each of those three in more detail. Wolfpack will be the most reliable way to run Windows NT Server. Out of the box, it will provide automatic recovery for file sharing, printer sharing, and Internet/intranet services. It will be able to provide basic recovery services for virtually any existing server application without coding changes, and will feature an administrator's console that makes it easy to take a server off-line for maintenance without disrupting your mission-critical business applications. The other server can deliver services while one is being changed. Wolfpack will run on standard servers from many vendors. It can use many interconnects, ranging from standard Ethernet to specialized high-speed ones like Tandem ServerNet. It works with a wide range of disk drives and controllers, including standard SCSI drives. This broad hardware support means flexibility, choice, and competitive pricing. Wolfpack clustering technology allows all nodes in the cluster to do useful work -- there's no wasted "standby" server sitting idle waiting for a failure, as there is with server mirroring solutions. And, of course, because it's Windows software, it will have a familiar and easy-to-use graphical interface for the administrator. SQL Server will use Wolfpack's Clustering API to provide high availability via disk and IP address failover. SQL Server continues its close integration with NT and its unmatched ease of use. SQL Server 7.0 will provide a GUI configuration and management wizard to make it easy to configure high-availability databases.
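To make "Resource Group is the unit of failover" concrete, here is a minimal illustrative model in Python. This is not the Wolfpack API itself, just the concepts the slide names (resources, dependencies that stay inside one group, and whole-group failover); all class and variable names are mine:

# Illustrative model of Wolfpack-style resource groups (not the real API).
class Resource:
    def __init__(self, name, depends_on=None):
        self.name = name
        self.depends_on = depends_on or []   # dependencies do not cross group boundaries

class ResourceGroup:
    """A resource group fails over from one node to another as a unit."""
    def __init__(self, name, resources, owner):
        self.name = name
        self.resources = resources
        self.owner = owner                   # the node currently serving the group

    def fail_over(self, new_owner):
        print(f"moving group {self.name} to {new_owner}")
        self.owner = new_owner               # all resources in the group move together

disk = Resource("shared disk D:")
ip = Resource("IP address")
netname = Resource("NetName", depends_on=[ip])
web = Resource("Web service", depends_on=[disk, netname])
group = ResourceGroup("web-group", [disk, ip, netname, web], owner="NodeA")
group.fail_over("NodeB")                     # e.g. when NodeA fails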

37 Wolfpack NT Clusters 1.0: Two node file and print failover
(Diagram: clients connected to nodes Alice and Betty, each with private disks plus shared SCSI disk strings; GUI admin interface)

Speaker notes: Wolfpack NT Clusters 1.0 supports clusters containing two nodes, affectionately called Alice and Betty. Alice and Betty have some private devices and some shared SCSI strings. At any instant, each SCSI disk is "owned" by either Alice or Betty. The SCSI II commands are used to implement this ownership. Most modern SCSI controllers and disks correctly implement these commands. Microsoft is qualifying many controller and disk vendors for Wolfpack. In configuring Wolfpack NT Clusters, the operator assigns shared devices to one or another failover Resource Group. During normal operation one failover group is served by Alice and the other group is served by Betty. In case one node fails, the other node takes ownership of the shared devices in that resource group and starts serving them. When the failed node returns to service, it can resume ownership of the resource group. Resources in the group can be disks, services, IP addresses, SQL databases, and other resources. The Wolfpack API allows any application to declare itself as a resource and participate in a Resource Group. The cluster administration interface provides a graphical way to define resource groups and resources. It also provides a way to monitor and control the resource groups.

38 What is Wolfpack?
(Architecture diagram: Cluster Management Tools call the Cluster API DLL, which talks via RPC to the Cluster Service -- Global Update Manager, Database Manager, Node Manager, Event Processor, Failover Mgr, Resource Mgr, and Communication Manager (to other nodes). The Resource Mgr drives Resource Monitors through the Resource Management Interface; Physical, Logical, and App Resource DLLs expose Open, Online, LooksAlive, IsAlive, Offline, and Close entry points. Cluster Aware Apps and Non-Aware Apps sit above the resource DLLs.)
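The Open/Online/LooksAlive/IsAlive/Offline/Close names in the diagram are the entry points a resource DLL exposes to its Resource Monitor. A minimal sketch of that callback shape in Python, illustrative only -- the real interface is a C DLL, and this class and its method bodies are my invention:

# Illustrative sketch of the resource-DLL entry points named in the diagram.
class FileShareResource:
    """A resource as a resource monitor sees it: it can be brought online,
    health-checked cheaply (LooksAlive) or thoroughly (IsAlive), and taken offline."""

    def open(self, name):       # Open: bind to the resource's configuration
        self.name = name
        self._online = False

    def online(self):           # Online: start serving (e.g. export the file share)
        self._online = True

    def looks_alive(self):      # LooksAlive: cheap, frequent health check
        return self._online

    def is_alive(self):         # IsAlive: thorough, less frequent health check
        return self._online     # a real check would actually touch the share

    def offline(self):          # Offline: stop serving cleanly
        self._online = False

    def close(self):            # Close: release anything acquired in Open
        del self.name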

39 Where We Are Today
Clusters moving fast: OLTP, Sort, WolfPack
Technology ahead of schedule: cpus, disks, tapes, wires, ...
OR Databases are evolving
Parallel DBMSs are evolving
HSM still immature

40 Outline
The challenge: Building GIANT data stores
  for example, the EOS/DIS 15 PB system
Conclusion 1: Think about MOX and SCANS
Conclusion 2: Think about Clusters
  SMP report
  Cluster report

41 Building PetaByte Servers
Jim Gray, Microsoft Research
Kilo 10^3, Mega 10^6, Giga 10^9, Tera 10^12 (today, we are here), Peta 10^15, Exa 10^18

