
1 The Personal Petabyte, The Enterprise Exabyte
Jim Gray, Microsoft Research
Presented at IIST Asilomar, 10 December 2003

2 Outline
History
Changing Ratios
Who Needs a Petabyte?
Thesis: in 20 years the Personal Petabyte will be affordable. Most personal bytes will be video. Enterprise Exabytes will be sensor data.

3 An Early Disk
Phaistos Disk:
–1700 BC
–Minoan (Cretan, Greek)
No one can read it.

4 Early Magnetic Disk
1956 IBM 305 RAMAC: 4 MB, 50 × 24-inch disks, 1200 rpm, 100 ms access, 35 k$/y rent.
Included computer & accounting software (tubes, not transistors).

5 10 years later (1966 Illiac): 1.6 meters, 30 MB.

6 Or 1970: IBM 2314 at 29 MB.

7 History: 1980 Winchester
Seagate 5¼-inch: 5 MB
Fujitsu Eagle: ~470 MB

8 The MAD Future: Terror Bytes
In the beginning there was the paramagnetic limit: 10 Gbpsi.
The limit keeps growing (now ~200 Gbpsi).
Mark H. Kryder, Seagate, "Future Magnetic Recording Technologies" (FAST keynote), apologizes: only 100× density improvement, then we are out of ideas.
That's a 20 TB desktop and a 4 TB laptop!

9 Outline
History
Changing Ratios
–Disk to RAM
–DASD is dead
–Disk space is free
–Disk archive-interchange
–Network faster than disk
–Capacity vs. access
–TCO == people cost
–Smart disks happened
–The entry cost barrier
Who Needs a Petabyte?

10 Storage Ratios Changed
10× better access time, 10× more bandwidth, 100× more capacity.
Data 25× cooler (1 Kaps/20 MB vs 1 Kaps/500 MB).
4,000× lower media price; 20× to 100× lower disk price.
Scan takes 10× longer (3 min vs 45 min).
RAM/disk media price ratio changed over the decades (…:1, …:1, …:1); today disk is ~1 $/GB and DRAM ~200 $/GB, i.e. 200:1.

11 Disk Data Can Move to RAM in 10 Years
Disk is ~100× cheaper than RAM per byte, and both get 100× bigger in 10 years, so:
Price_RAM_TB(t+10) = Price_Disk_TB(t)
Move data to main memory.
Seems the RAM/disk bandwidth ratio is also ~100:1.

12 DASD (Direct Access Storage Device) Is Dead
Accesses got cheaper:
–Better disks
–Cheaper disks!
Disk access/bandwidth is the scarce resource:
–1990: 5-minute scan
–2003: 100-minute scan (300 GB at 50 MB/s)
Sequential bandwidth is ~50× faster than random; a random scan takes ~3 days.
The ratio will get 10× worse in 10 years: 100× more capacity, only 10× more bandwidth.
Invent ways to trade capacity for bandwidth: use the capacity without using bandwidth. (A back-of-envelope check of the scan numbers follows.)
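
A minimal sketch of the arithmetic behind the scan times above. The 150 IOPS figure and 8 KB page size are my assumptions for a 2003-era disk; the slide only gives capacity and sequential bandwidth:

```python
# Sequential vs random scan of a 2003-era disk:
# 300 GB capacity, 50 MB/s sequential, assumed ~150 random IOPS, 8 KB pages.
CAPACITY_B = 300e9
SEQ_BW_BPS = 50e6
IOPS = 150          # assumption: ~6.7 ms per random access
PAGE_B = 8192

seq_scan_s = CAPACITY_B / SEQ_BW_BPS        # read straight through
rand_scan_s = (CAPACITY_B / PAGE_B) / IOPS  # one random access per page

print(f"sequential scan: {seq_scan_s/60:.0f} minutes")   # ~100 minutes
print(f"random scan:     {rand_scan_s/86400:.1f} days")  # ~3 days
```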

13 Disk Space Is Free; Bandwidth & Accesses/sec Are Not
1 k$/TB, going to 100 $/TB. 20 TB disks are on the (distant) horizon.
100× density: waste capacity intelligently:
–Version everything
–Never delete anything
–Keep many copies: snapshots, mirrors (triple and geoplex), cooperative caching (Farsite and OceanStore), disk archive

14 Disk as Archive-Interchange
Tape is archive / interchange / low cost; disk is now competitive in all 3 categories.
What format? FAT? CDFS? …
What tools? Need the software to do disk-based backup/restore; commonly snapshot (multi-version FS).
Radical: peer-to-peer file archiving
–Many researchers looking at this: OceanStore, Farsite, others…

15 Disk vs Network: Now the Network Is Faster (!)
Old days:
–10 MBps disk, low CPU cost (0.1 ins/byte)
–1 MBps net, huge CPU cost (10 ins/byte)
New days:
–50 MBps disk, low CPU cost
–100 MBps net, low CPU cost (TOE, RDMA)
Consequence:
–You can remote disks.
–Allows consolidation.
–Aggregate (bisection) bandwidth is still a problem.

16 Storage TCO == People Time
1980 rules of thumb: 1 systems programmer per MIPS, 1 data admin per 10 GB.
That would be 800 sys programmers + 4 data admins for your laptop. Sometimes it must seem like that, but…
Today: one data admin per terabyte or more, depending on process and data value.
Automate everything. Use redundancy to mask (and repair) problems. Save people, spend hardware.

17 Disk Evolution: Smart Disks
System on a chip + high-speed LAN: the disk is a supercomputer!
[Scale graphic: kilo, mega, giga, tera, peta, exa, zetta, yotta]

18 Smart Disks Happened
Disk appliances are here: cameras, games, PVRs, file servers.
Challenge: entry price.

19 The Entry Cost Barrier: Connect the Dots
Consumer electronics want low entry cost:
–1970: 20,000$
–1980: 2,000$
–2000: 200$
–wanted today: …$
If magnetics can't do this, another technology will. Think: copiers, hydraulic shovels, …
[Graph: ln(price) vs time, with "wanted" and "today" marked]

20 Outline
History
Changing Ratios
Who Needs a Petabyte?
–Petabyte for 1 k$ in ~20 years
–Affordable but useless?
–How much information is there?
–The Memex vision
–MyLifeBits
–The other 20% (enterprise storage)
[Scale graphic: kilo … yotta; we are here (tera)]

21 A Bleak Future: The ½-Platter Society?
Conclusion from the Information Storage Industry Consortium HDD Applications Roadmap Workshop:
–Most users need only 20 GB.
–We are heading to a ½-platter industry.
80% of units and capacity is personal disks (not enterprise servers).
The end of disk capacity demand: a zero-billion-dollar industry?

22 Try to Fill a Terabyte in a Year
Item                          Items/TB   Items/day
300 KB JPEG                   3 M        9,800
1 MB Doc                      1 M        2,900
1 hour 256 kb/s MP3 audio     9 K        26
1 hour 1.5 Mb/s MPEG video    ~1.5 K     ~4
A petabyte volume has to be some form of video. (The arithmetic is sketched below.)
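
A quick reconstruction of the table's arithmetic. The video row was garbled in this transcript, so its values above are computed from the stated bit rate rather than quoted; the other rows land within rounding of the slide's figures:

```python
# Items that fit in 1 TB, and how many per day it takes to fill
# the terabyte in a year (365 days).
TB = 1e12
items = {
    "300 KB JPEG":              300e3,
    "1 MB Doc":                 1e6,
    "1 hr 256 kb/s MP3 audio":  256e3 / 8 * 3600,  # ~115 MB/hour
    "1 hr 1.5 Mb/s MPEG video": 1.5e6 / 8 * 3600,  # ~675 MB/hour
}
for name, size_bytes in items.items():
    per_tb = TB / size_bytes
    print(f"{name:28s} {per_tb:12,.0f}/TB {per_tb/365:8,.0f}/day")
```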

23 Growth Comes From NEW Apps
The 10 M$ computer of 1980 costs 1 k$ today.
If we were still doing the same things, IT would be a 0 B$/y industry.
NEW things absorb the new capacity. A 2010 portable?
–100 Gips processor
–1 GB RAM
–1 TB disk
–1 Gbps network
–Many form factors

24 The Terror Bytes Are Here
1 TB costs 1 k$ to buy; 1 TB costs 300 k$/y to own.
Management & curation are expensive. (I manage about 15 TB in my spare time; no, I am not paid 4.5 M$/y to manage it.)
–Searching 1 TB takes minutes or hours or days or…
I am petrified by petabytes. But people can afford them, so we have lots to do. Automate!
[Scale graphic: kilo … yotta; we are here (tera)]

25 How Much Information Is There?
Soon everything can be recorded and indexed.
Most bytes will never be seen by humans.
Data summarization, trend detection, and anomaly detection are key technologies.
See Mike Lesk, "How Much Information Is There?"; see Lyman & Varian, "How Much Information?"
[Scale graphic: kilo … yotta, with markers for a photo, a book, a movie, all books (words), all books multimedia, everything recorded]

26 Memex
"As We May Think", Vannevar Bush, 1945:
"A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility… yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so he can be profligate and enter material freely."

27 Why Put Everything in Cyberspace?
Low rent: min $/byte.
Shrinks time: now or later.
Shrinks space: here or there.
Automate processing: knowbots locate, process, analyze, summarize.
Point-to-point OR broadcast; immediate OR time-delayed.

28 How Will We Find Anything?
Need queries, indexing, pivoting, scalability, backup, replication, online update, set-oriented access. If you don't use a DBMS, you will implement one!
Simple logical structure:
–Blob and link is all that is inherent.
–Additional properties (facets == extra tables) and methods on those tables (encapsulation).
More than a file system: unifies data and meta-data. SQL++ / DBMS. (A toy schema follows.)
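
A minimal sketch of the blob-and-link-plus-facets idea in SQLite. The table and column names here are illustrative assumptions, not from the talk:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Inherent structure: blobs, and links between them.
    CREATE TABLE item (id INTEGER PRIMARY KEY, blob BLOB);
    CREATE TABLE link (src INTEGER REFERENCES item(id),
                       dst INTEGER REFERENCES item(id),
                       label TEXT);
    -- A facet: extra schematized properties for items that are photos.
    CREATE TABLE photo_facet (item_id INTEGER REFERENCES item(id),
                              taken_on TEXT, camera TEXT);
""")
con.execute("INSERT INTO item(id, blob) VALUES (1, x'FFD8FFE0')")  # a JPEG blob
con.execute("INSERT INTO photo_facet VALUES (1, '2003-12-10', 'DSC-P1')")
con.execute("INSERT INTO item(id, blob) VALUES (2, x'00')")        # an annotation
con.execute("INSERT INTO link VALUES (2, 1, 'annotates')")

# Set-oriented access: photos together with anything linked to them.
for row in con.execute("""
        SELECT p.item_id, p.taken_on, l.src
        FROM photo_facet p LEFT JOIN link l ON l.dst = p.item_id"""):
    print(row)
```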

29 MyLifeBits: The Guinea Pig
Gordon Bell is digitizing his life. Has now scanned virtually all:
–Books written (and read when possible)
–Personal documents (correspondence, memos, email, bills, legal, …)
–Photos
–Posters, paintings, photos of things (artifacts, …medals, plaques)
–Home movies and videos
–CD collection
–And, of course, all PC files
Now recording: phone, radio, TV (movies), web pages… conversations.
Paperless throughout: paper scanned, then discarded.
Only 30 GB!!! (excluding digital videos; video is 2+ TB and growing fast)

30 Capture and encoding

31 I mean everything

32 Gordon Bell WAG (wild-ass guess): a 67-year, 25K-day life is a Personal Petabyte (1 PB). (Checked below.)
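
The implied daily capture rate, as a quick check (assuming the petabyte is spread evenly over the 25,000 days, which is my simplification):

```python
# Daily rate needed for a 25,000-day life to total 1 PB.
PB = 1e15
DAYS = 25_000
print(f"{PB / DAYS / 1e9:.0f} GB/day")  # 40 GB/day: consistent with
# the thesis that most personal bytes will be video
```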

33 80% of Data Is Personal / Individual. But What About the Other 20%?
Business:
–Wal-Mart online: 1 PB and growing…
–Paradox: most transaction systems < 1 PB.
–Have to go to image/data monitoring for big data.
Government:
–Government is the biggest business.
Science:
–LOTS of data.

34 Information Avalanche
Both better observational instruments and better simulations are producing a data avalanche.
Examples:
–Turbulence: 100 TB simulation, then mine the information
–BaBar: grows 1 TB/day (2/3 simulation information, 1/3 observational information)
–CERN: LHC will generate 1 GB/s, 10 PB/y
–VLBA (NRAO) generates 1 GB/s today
–NCBI: only ½ TB, but doubling each year; very rich dataset
–Pixar: 100 TB/movie
(Image courtesy of C. Meneveau & A. Szalay, JHU)

35 Q: Where Will the Data Come From? A: Sensor Applications
Earth observation: 15 PB by 2007.
Medical images & information + health monitoring: potentially 1 GB/patient/y, so ~1 EB/y.
Video monitoring: ~1E8 video feeds at ~1E5 Bps each, so ~10 TB/s, ~100 EB/y filtered???
Airplane engines: 1 GB sensor data/flight, 100,000 engine-hours/day, so ~30 PB/y.
Smart dust: ?? EB/y (…shollar/macro_motes/macromotes.html)
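
A sanity check of two of these rates. The ~1 GB per engine-hour figure is my inference from the slide's totals, not stated there:

```python
SECONDS_PER_YEAR = 365 * 86400

# Video: 1e8 feeds at ~1e5 bytes/s each.
video_bps = 1e8 * 1e5
print(f"video: {video_bps/1e12:.0f} TB/s, "
      f"{video_bps*SECONDS_PER_YEAR/1e18:.0f} EB/y raw")  # 10 TB/s, ~315 EB/y
# (the slide's ~100 EB/y is the post-filtering figure)

# Engines: 100,000 engine-hours/day at ~1 GB per engine-hour (assumption).
engine_bytes_per_day = 100_000 * 1e9
print(f"engines: {engine_bytes_per_day*365/1e15:.0f} PB/y")  # ~37 PB/y, vs ~30 on the slide
```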

36 CERN Tier 0
Instruments: CERN LHC, petabytes per year; looking for the Higgs particle.
Sensors: 1000 GB/s (1 TB/s ≈ 30 EB/y)
Events: 75 GB/s
Filtered: 5 GB/s
Reduced: 0.1 GB/s ≈ 2 PB/y
Data pyramid: 100 GB : 1 TB : 100 TB : 1 PB : 10 PB
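
The two annualized figures are simple rate conversions; a quick check (the duty-cycle remark in the comment is my inference, not from the slide):

```python
SECONDS_PER_YEAR = 365 * 86400   # ~3.15e7 s wall clock

print(f"raw:     {1e12  * SECONDS_PER_YEAR / 1e18:.0f} EB/y")  # ~32 EB/y, slide says ~30
print(f"reduced: {0.1e9 * SECONDS_PER_YEAR / 1e15:.1f} PB/y")  # ~3.2 PB/y wall clock;
# the slide's ~2 PB/y implies the machine runs roughly 2/3 of the year
```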

37 LHC Requirements (2005 onward)
1E9 events × 1 MB/event = 1 PB/year/experiment.
Reconstructed: 100 TB/reconstruction/year/experiment.
Send to Tier 1 regional centres: 400 TB/year to RAL?
Keep one set + derivatives on disk, and the rest on tape. But UK plans a Tier 1 clone: many data clones.
Source: John Gordon, IT Department, CLRC/RAL, CUF Meeting, October 2000

38 Science Data Volume
ESO/ST-ECF Science Archive: 100 TB archive.
Similar at Hubble, Keck, SDSS, …: ~1 PB aggregate.

39 Data Pipeline: NASA
Level 0: raw data (data stream)
Level 1: calibrated data (measured values)
Level 1A: calibrated & normalized (flux/magnitude/…)
Level 2: derived data metrics (e.g. vegetation index)
Data volume: 0 ~ 1 ~ 1A << 2. Level 2 >> Level 1 because there are MANY data products, and all published data editions (versions) must be kept.
[Diagram: editions E1 E2 E3 E4 over time; 4 editions of 4 Level 2 products, each is small, but…]
Source: EOSDIS Core System Information for Scientists

40 DataGrid Computing
Store exabytes twice (for redundancy). Access them from anywhere.
Implies huge archive/data centers; supercomputer centers become super data centers.
Examples: Google, Yahoo!, Hotmail, BaBar, CERN, Fermilab, SDSC, …

41 Outline
History
Changing Ratios
Who Needs a Petabyte?
Thesis: in 20 years the Personal Petabyte will be affordable. Most personal bytes will be video. Enterprise Exabytes will be sensor data.

42 Bonus Slides

43 TerraServer V4 (2000…2004)
8 web front ends; 4 × (8-CPU + 4 GB) DB servers; 18 TB triplicated disks; classic SAN (tape not shown); ~2 M$.
Works GREAT! Now replaced by…
[Diagram: WEB ×8, SQL ×4, SAN]

44 TerraServer V5
Storage bricks:
–White-box commodity servers
–4 TB raw / 2 TB RAID1 SATA storage
–Dual hyper-threaded Xeon 2.4 GHz, 4 GB RAM
Partitioned databases (PACS: partitioned array):
–3 storage bricks = 1 TerraServer of data
–Data partitioned across 20 databases
–More data & partitions coming
Low-cost availability:
–4 copies of the data: RAID1 SATA mirroring × 2 redundant bunches
–Spare brick to repair a failed brick (2N+1 design)
–Web application is bunch-aware: load-balances between redundant databases, fails over to the surviving database on failure (sketched below)
~100 K$ capital expense.
[Diagram label: KVM / IP]
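
A toy sketch of the "bunch-aware" load-balance-and-failover behavior. All names and the health-check mechanism are hypothetical; the slide states only the behavior:

```python
import random

class BunchAwarePool:
    """Route queries to one of the redundant database bunches;
    fail over to a survivor when a bunch goes down."""
    def __init__(self, bunches):
        self.bunches = bunches          # e.g. {"A": conn_a, "B": conn_b}
        self.healthy = set(bunches)

    def mark_down(self, name):          # called by a health checker (not shown)
        self.healthy.discard(name)

    def query(self, sql):
        if not self.healthy:
            raise RuntimeError("no surviving database bunch")
        name = random.choice(sorted(self.healthy))  # balance across survivors
        return self.bunches[name](sql)

# Usage with stub "connections":
pool = BunchAwarePool({"A": lambda q: f"A ran {q}", "B": lambda q: f"B ran {q}"})
print(pool.query("SELECT 1"))
pool.mark_down("A")                     # bunch A fails...
print(pool.query("SELECT 1"))           # ...traffic fails over to B
```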

45 How Do You Move a Terabyte?
Context      Speed (Mbps)  Rent ($/month)  $/Mbps   $/TB sent  Time/TB
Home phone   0.04          40              1,000    3,086      6 years
Home DSL     0.6           70              117      360        5 months
T1           1.5           1,200           800      2,469      2 months
T3           43            28,000          651      2,010      2 days
OC3          155           49,000          316      976        14 hours
OC192        9,600         1,920,000       200      617        14 minutes
100 Mbps     100                                               1 day
Gbps         1,000                                             2.2 hours
Source: "TeraScale Sneakernet", Microsoft Research, Jim Gray et al.
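
The Time/TB column is pure arithmetic, so it can be checked directly; the link speeds are the table's, everything else is unit conversion:

```python
def time_per_tb(mbps):
    """Seconds to push 1 TB (8e12 bits) through a link of `mbps` megabits/s."""
    return 8e12 / (mbps * 1e6)

for name, mbps in [("Home phone", 0.04), ("Home DSL", 0.6), ("T1", 1.5),
                   ("T3", 43), ("OC3", 155), ("OC192", 9600),
                   ("100 Mbps", 100), ("Gbps", 1000)]:
    s = time_per_tb(mbps)
    print(f"{name:10s} {s/3600:12,.1f} hours ({s/86400:9,.2f} days)")
```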

46 Key Observations for Personal Stores (and for Larger Stores)
Schematized storage can help organization and search.
Schematized XML data sets are a universal way to exchange data: answers and new data.
If data are objects, then we need a standard representation for classes & methods.

47 Longhorn: For Knowledge Workers
Simple (self-*): auto install/manage/tune/repair.
Schema: data carries semantics.
Search: find things fast (driven by schema).
Sync: desktop state anywhere.
Security (Palladium): privacy, trustworthiness (virus, spam, …), DRM (protect IP).
Shell: task-based UI (aka activity-based UI).
Office-Longhorn:
–Intelligent documents
–XML and schemas

48 How Do We Represent It to the Outside World? Schematized Storage
The file metaphor is too primitive: just a blob. The table metaphor is too primitive: just records.
Need metadata describing data context, in a standard format:
–Format
–Provenance (author/publisher/citations/…)
–Rights
–History
–Related documents
XML and XML Schema; the DataSet is a great example of this. The world is now defining standard schemas. (A tiny illustration follows.)
[Diagram: schema + data (or diffgram)]
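
A minimal illustration of wrapping a payload with schematized context metadata. The element and attribute names here are mine, not from any standard the slide cites:

```python
import xml.etree.ElementTree as ET

doc = ET.Element("document")
meta = ET.SubElement(doc, "metadata")
ET.SubElement(meta, "format").text = "text/plain"
ET.SubElement(meta, "provenance", author="J. Gray", publisher="MSR")
ET.SubElement(meta, "rights").text = "internal"
ET.SubElement(meta, "history").text = "edition E1"
ET.SubElement(doc, "content").text = "The personal petabyte..."

# The metadata travels with the data, in one self-describing unit.
print(ET.tostring(doc, encoding="unicode"))
```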

49 There Is a Problem
GREAT!!!!
–XML documents are portable objects.
–XML documents are complex objects.
–WSDL defines the methods on objects (the class).
But will all the implementations match? Think of UNIX or SQL or C or… This is a work in progress.
Niklaus Wirth: Algorithms + Data Structures = Programs

50 Disk Storage Cheaper Than Paper
File cabinet (4 drawer):
–Cabinet: 250$
–Paper (24,000 sheets): 250$
–Space (@ ~10$/ft²): 180$
–Total: 700$, i.e. 0.03 $/sheet (3 pennies per page)
Disk:
–Disk (250 GB): 250$
–ASCII: 100 M pages, i.e. 2e-6 $/sheet (10,000× cheaper): a micro-dollar per page
–Image: 1 M photos, i.e. 3e-4 $/photo (100× cheaper): a milli-dollar per photo
Store everything on disk.
Note: disk is 100× to 1000× cheaper than RAM.
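
The per-page figures follow from simple division; the page and photo sizes are implied by the slide's counts (~2.5 KB per ASCII page, ~250 KB per photo):

```python
cabinet_total = 250 + 250 + 180          # cabinet + paper + space, in $
print(f"paper: {cabinet_total/24_000:.3f} $/sheet")           # ~0.03 $/sheet

disk_price, disk_bytes = 250, 250e9
print(f"ascii: {disk_price/(disk_bytes/2.5e3):.1e} $/page")   # ~2.5e-6 $/page
print(f"photo: {disk_price/(disk_bytes/250e3):.1e} $/photo")  # ~2.5e-4 $/photo
```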

51 Data Analysis
Looking for:
–Needles in haystacks: the Higgs particle
–Haystacks: dark matter, dark energy
Needles are easier than haystacks.
Global statistics have poor scaling:
–Correlation functions are N², likelihood techniques N³.
As data and computers grow at the same rate, we can only keep up with N log N. (See the note below.)
A way out?
–Discard the notion of optimal (data is fuzzy, answers are approximate).
–Don't assume infinite computational resources or memory.
Requires a combination of statistics & computer science.
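
Why N log N is the ceiling, in one line (my paraphrase of the slide's claim): if the data size N and the compute budget both grow by a factor k, an algorithm of cost c(N) remains feasible only when its cost grows by at most k as well.

```latex
\frac{c(kN)}{c(N)} \le k:
\qquad
c(N) = N\log N \;\Rightarrow\; \frac{kN\log(kN)}{N\log N} \approx k \ \text{(keeps up)},
\qquad
c(N) = N^2 \;\Rightarrow\; \frac{(kN)^2}{N^2} = k^2 \ \text{(falls behind)}.
```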

52 Analysis and Databases
Much statistical analysis deals with:
–Creating uniform samples
–Data filtering
–Assembling relevant subsets
–Estimating completeness
–Censoring bad data
–Counting and building histograms
–Generating Monte Carlo subsets
–Likelihood calculations
–Hypothesis testing
Traditionally these are performed on files; most of these tasks are much better done inside the DB.
Bring Mohamed to the mountain, not the mountain to him. (Example below.)
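
A small example of pushing one of these tasks (a histogram) into the database instead of dragging raw rows to the client; the schema and data are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE event (energy REAL)")
con.executemany("INSERT INTO event VALUES (?)",
                [(e,) for e in (0.3, 1.2, 1.7, 2.4, 2.6, 3.1, 0.9)])

# The histogram is computed where the data lives; only bin counts move.
for bin_, n in con.execute("""
        SELECT CAST(energy AS INTEGER) AS bin, COUNT(*)
        FROM event GROUP BY bin ORDER BY bin"""):
    print(f"[{bin_}, {bin_ + 1}) : {n}")
```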

53 Data Access Is Hitting a Wall: FTP and GREP Are Not Adequate
You can GREP 1 MB in a second.
You can GREP 1 GB in a minute.
You can GREP 1 TB in 2 days.
You can GREP 1 PB in 3 years.
You can FTP 1 MB in 1 second.
You can FTP 1 GB per minute (≈ 1 $/GB)… 1 TB in 2 days and 1 k$… 1 PB in 3 years and 1 M$.
Oh, and 1 PB ≈ 5,000 disks.
At some point you need indices to limit search, and parallel data search and analysis. This is where databases can help.
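
The slide's figures are order-of-magnitude; a flat rate of 1 GB per minute reproduces their shape:

```python
GB_PER_MINUTE = 1   # the slide's sustained GREP/FTP rate (order of magnitude)

for label, gb in [("1 GB", 1), ("1 TB", 1e3), ("1 PB", 1e6)]:
    minutes = gb / GB_PER_MINUTE
    print(f"{label}: {minutes/60/24:10,.2f} days "
          f"({minutes/60/24/365:6.2f} years)")
# 1 TB -> ~0.7 days, 1 PB -> ~1.9 years; the slide rounds up to
# friendlier "2 days" and "3 years" figures
```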

54 Smart Data (Active Databases)
If there is too much data to move around, take the analysis to the data!
Do all data manipulations at the database:
–Build custom procedures and functions in the database.
Automatic parallelism.
Easy to build in custom functionality (databases & procedures being unified): e.g. temporal and spatial indexing, pixel processing, …
Easy to reorganize the data:
–Multiple views, each optimal for certain types of analyses.
–Building hierarchical summaries is trivial.
Scalable to petabyte datasets.
