
1 What I Have Been Doing: Peta Bumps, 10k$/TB Scalable Computing, Sloan Digital Sky Survey

2 Sense of scale: how fat is your pipe? The fattest pipe on the MS campus is the WAN!
– 300 MBps: OC48 (≈2.5 Gbps), or memcpy()
– 94 MBps: coast to coast
– 90 MBps: PCI
– 20 MBps: disk / ATM / OC3

3 The route: Arlington, VA → New York → San Francisco, CA → Redmond/Seattle, WA: 5626 km, 10 hops.
Participants: Information Sciences Institute, Microsoft, Qwest, University of Washington, Pacific Northwest Gigapop, HSCC (High Speed Connectivity Consortium), DARPA.

4 The Path DC -> SEA

C:\> tracert -d 131.107.151.194
Tracing route to 131.107.151.194 over a maximum of 30 hops
 0  -------                 ------- DELL 4400 Win2K WKS, Arlington, Virginia, ISI (Alteon GbE)
 1  16 ms  <10 ms <10 ms  140.173.170.65  ------- Juniper M40 GbE, Arlington, Virginia, ISI, interface ISIe
 2  <10 ms <10 ms <10 ms  205.171.40.61   ------- Cisco GSR OC48, Arlington, Virginia, Qwest DC Edge
 3  <10 ms <10 ms <10 ms  205.171.24.85   ------- Cisco GSR OC48, Arlington, Virginia, Qwest DC Core
 4  <10 ms <10 ms 16 ms   205.171.5.233   ------- Cisco GSR OC48, New York, New York, Qwest NYC Core
 5  62 ms  63 ms  62 ms   205.171.5.115   ------- Cisco GSR OC48, San Francisco, CA, Qwest SF Core
 6  78 ms  78 ms  78 ms   205.171.5.108   ------- Cisco GSR OC48, Seattle, Washington, Qwest Sea Core
 7  78 ms  78 ms  94 ms   205.171.26.42   ------- Juniper M40 OC48, Seattle, Washington, Qwest Sea Edge
 8  78 ms  79 ms  78 ms   208.46.239.90   ------- Juniper M40 OC48, Seattle, Washington, PNW Gigapop
 9  78 ms  78 ms  94 ms   198.48.91.30    ------- Cisco GSR OC48, Redmond, Washington, Microsoft
10  78 ms  78 ms  94 ms   131.107.151.194 ------- Compaq SP750 Win2K WKS, Redmond, Washington, Microsoft (SysKonnect GbE)

5 Single-stream TCP/IP throughput: 750 Mbps over ~5000 km (957 Mbps multi-stream) ≈ 4e15 bit-meters per second = 4 peta bmps ("peta bumps"); ~5 peta bmps multi-stream.
Participants: Information Sciences Institute, Microsoft, Qwest, University of Washington, Pacific Northwest Gigapop, HSCC (High Speed Connectivity Consortium), DARPA.

6 PetaBumps
– 751 Mbps for 300 seconds (≈28 GB): single-thread, single-stream TCP/IP, desktop-to-desktop, out-of-the-box performance*
– 5626 km × 751 Mbps ≈ 4.2e15 bit-meters/second ≈ 4.2 peta bmps (arithmetic sketched below)
– Multi-stream is 952 Mbps ≈ 5.2 peta bmps
* 4470-byte MTUs were enabled on all routers; 20 MB TCP window size.
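The bit-meter figure is just distance times throughput; a minimal sketch of the arithmetic using the numbers quoted on the slide (the multi-stream product comes out a bit above the slide's rounded 5.2):

```python
# Sketch of the "peta bumps" (bit-meters per second) arithmetic from this slide.
distance_m = 5626e3          # Arlington, VA to Redmond, WA: 5626 km
single_stream_bps = 751e6    # 751 Mbps single-stream TCP/IP
multi_stream_bps = 952e6     # 952 Mbps multi-stream
seconds = 300                # duration of the single-stream run

transferred_gb = single_stream_bps * seconds / 8 / 1e9    # ~28 GB moved
single = distance_m * single_stream_bps                   # ~4.2e15 bit-meters/s
multi = distance_m * multi_stream_bps                     # ~5.4e15 (slide quotes ~5.2)

print(f"moved ~{transferred_gb:.0f} GB in {seconds} s")
print(f"single-stream: {single:.2e} bit-m/s (~{single / 1e15:.1f} peta bmps)")
print(f"multi-stream:  {multi:.2e} bit-m/s (~{multi / 1e15:.1f} peta bmps)")
```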

7 (no text on this slide)

8 Pointers
– The single-stream submission: http://research.microsoft.com/~gray/papers/Windows2000_I2_land_Speed_Contest_Entry_(Single_Stream_mail).htm
– The multi-stream submission: http://research.Microsoft.com/~gray/papers/Windows2000_I2_land_Speed_Contest_Entry_(Multi_Stream_mail).htm
– The code: http://research.Microsoft.com/~gray/papers/speedy.htm (speedy.h, speedy.c)
– And a PowerPoint presentation about it: http://research.Microsoft.com/~gray/papers/Windows2000_WAN_Speed_Record.ppt

9 What I Have Been Doing: Peta Bumps, 10k$/TB Scalable Computing, Sloan Digital Sky Survey

10 TPC-C: high-performance clusters
– Standard transaction processing benchmark
– Mix of 5 simple transaction types
– Database scales with the workload
– Measures a balanced system

11 Scalability Successes
Single-site clusters:
– Billions of transactions per day
– Tera-ops & petabytes (10k-node clusters)
– Micro-dollars per transaction
Hardware + software advances:
– TPC & sort examples (2x/year)
– Many other examples

12 Progress since Jan 99: running out of gas?
– 50% better peak performance (not 2x)
– 2x better price/performance
– At a cost ceiling: systems cost 7M$-13M$
– June 98 result was a hero effort (off-scale good!): Compaq/Alpha/Oracle, 96 CPUs, 8-node cluster, 102,542 tpmC @ 139$/tpmC (5/5/98)
Outta gas?

13 2/17/00: Back on schedule!
First proof point of commoditized scale-out:
– 1.7x better performance
– 3x better price/performance
– 4M$ vs 7M$-13M$
Much more to do, but… a great start! (Price/performance arithmetic sketched below.)
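For scale, a rough sketch of the price/performance arithmetic behind this comparison, using only the figures quoted on slides 12 and 13; note that the TPC-C $/tpmC price reflects the total priced configuration, so it need not match the headline system prices:

```python
# Rough TPC-C price/performance sketch from the figures quoted on the slides.
tpmc_1998 = 102_542                  # June 98 Compaq/Alpha/Oracle result
dollars_per_tpmc_1998 = 139.0
total_cost_1998 = tpmc_1998 * dollars_per_tpmc_1998        # ~$14.3M priced configuration

# Slide 13 claims ~1.7x the performance at ~3x better price/performance.
tpmc_2000 = 1.7 * tpmc_1998
dollars_per_tpmc_2000 = dollars_per_tpmc_1998 / 3

print(f"1998: {tpmc_1998:,} tpmC @ ${dollars_per_tpmc_1998:.0f}/tpmC (~${total_cost_1998 / 1e6:.1f}M)")
print(f"2000: ~{tpmc_2000:,.0f} tpmC @ ~${dollars_per_tpmc_2000:.0f}/tpmC")
```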

14 Year 2000 Sort Results (Daytona and Indy categories)
– PennySort: 4.5 GB (45 M records) in 886 seconds on a $1010 Win2K/Intel system, using HMsort. Brad Helmkamp, Keith McCready; Stenograph LLC.
– MinuteSort: 7.6 GB in 60 seconds, Ordinal Nsort on a 32-CPU SGI Origin (IRIX); and 21.8 GB (218 M records) in 56.51 seconds, NOW+HPVMsort on 64 WinNT nodes. Luis Rivera, Xianan Zhang, Andrew Chien; UCSD.
– TeraByte sort: 49 minutes, David Cossock, Sam Fineberg, Pankaj Mehra, John Peck; 68 x 2-CPU Compaq Tandem, Sandia Labs. And 1057 seconds, SPsort on a 1952-node SP cluster with 2168 disks; Jim Wyllie.
– Datamation: 1 M records in 0.998 seconds, Mitsubishi DIAPRISM hardware sorter with an HP 4 x 550 MHz Xeon PC server + 32 SCSI disks, Windows NT4. Shinsuke Azuma, Takao Sakuma, Tetsuya Takeo, Takaaki Ando, Kenji Shirai; Mitsubishi Electric Corp.

15 What's a Balanced System? (diagram showing the system bus and PCI bus)

16 Rules of Thumb in Data Engineering
– Moore's law -> an address bit per 18 months.
– Storage grows 100x/decade (except 1000x last decade!)
– Disk data of 10 years ago now fits in RAM (iso-price).
– Device bandwidth grows 10x/decade, so parallelism is needed.
– RAM:disk:tape price ratio is 1:10:30, going to 1:10:10.
– Amdahl's speedup law: maximum speedup is (S+P)/S.
– Amdahl's IO law: a bit of IO per instruction/second (a TBps per 10 teraops! 50,000 disks per 10 teraops: ~100 M$).
– Amdahl's memory law: a byte per instruction/second (going to 10) (1 TB RAM per teraop: 1 TeraDollars). PetaOps anyone?
– Gilder's law: aggregate bandwidth doubles every 8 months.
– 5-minute rule: cache disk data that is reused within 5 minutes.
– Web rule: cache everything!
(Two of these rules are sketched as code below.)
http://research.Microsoft.com/~gray/papers/MS_TR_99_100_Rules_of_Thumb_in_Data_Engineering.doc
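Two of these rules lend themselves to a couple of lines of code; a minimal sketch, with the five-minute-rule cost constants as illustrative placeholders rather than figures from the slide:

```python
# Sketch of two rules of thumb from this slide.

def amdahl_max_speedup(serial: float, parallel: float) -> float:
    """Maximum speedup with unlimited processors: (S + P) / S."""
    return (serial + parallel) / serial

def break_even_seconds(pages_per_mb_ram: float, accesses_per_sec_per_disk: float,
                       price_per_disk: float, price_per_mb_ram: float) -> float:
    """Five-minute-rule break-even reuse interval: cache a disk page in RAM
    if it is re-referenced more often than this many seconds."""
    return (pages_per_mb_ram / accesses_per_sec_per_disk) * (price_per_disk / price_per_mb_ram)

# 90% parallelizable work tops out at 10x speedup, no matter how many CPUs.
print(amdahl_max_speedup(serial=1.0, parallel=9.0))            # 10.0

# Placeholder economics: 128 8KB pages/MB, 80 accesses/s/disk, $600 disk, $2/MB RAM.
print(break_even_seconds(128, 80, 600.0, 2.0))                 # ~480 s, i.e. minutes
```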

17 Cheap Storage: disks are getting cheap: ~7 k$/TB (25 × 40 GB disks @ 230$ each)

18 Cheap Storage or Balanced System
Low-cost storage (2 × 1.5k$ servers), ~10k$/TB: 2 × (1k$ system + 8 × 70 GB disks + 100 Mb Ethernet)
Balanced server (9k$ / 0.5 TB):
– 2 × 800 MHz CPUs (2k$)
– 256 MB RAM (500$)
– 8 × 73 GB drives (4k$)
– Gbps Ethernet + switch (1.5k$)
– 18k$/TB, 36k$ per RAIDed TB (cost-per-TB arithmetic sketched below)
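A minimal sketch of the cost-per-terabyte arithmetic for the balanced-server configuration; the component prices are the slide's, the rounding is mine:

```python
# $/TB arithmetic for the "balanced server" on this slide (prices as quoted).
cpu, ram, disks, net = 2000, 500, 4000, 1500      # 2x800 MHz, 256 MB, 8x73 GB, GbE+switch
total = cpu + ram + disks + net                   # ~$8k; the slide rounds to ~9k$
capacity_tb = 8 * 73 / 1000                       # ~0.58 TB; the slide calls it ~0.5 TB

print(f"~{total / capacity_tb:,.0f} $/TB raw "
      f"(slide: ~18k$/TB raw, ~36k$ per RAIDed TB)")
```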

19 160 GB for 2k$ (now), 300 GB by year end:
– 4 × 40 GB IDE drives (2 hot-pluggable) (1,100$)
– SCSI-IDE bridge (200$)
– Box: 500 MHz CPU, 256 MB SDRAM, fan, power, Ethernet (700$)
Or 8 disks/box: 600 GB for ~3k$ (or 300 GB RAID)

20 Hot-Swap Drives for Archive or Data Interchange
– 25 MBps write rate (so N × 74 GB can be written in 3 hours)
– 74 GB shipped overnight ≈ N × 2 MB/second @ 19.95$/night (arithmetic sketched below)
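A small sketch of the sneakernet arithmetic on this slide; the write rate and drive size are the slide's, the ~10-hour overnight window is an assumption:

```python
# Sneakernet arithmetic for hot-swap drive interchange.
drive_gb = 74
write_mb_per_s = 25                                   # sequential write rate from the slide
fill_hours = drive_gb * 1000 / write_mb_per_s / 3600  # ~0.8 h per drive
drives_in_3h = 3 * 3600 * write_mb_per_s / (drive_gb * 1000)   # "N x 74 GB in 3 hours"

overnight_hours = 10                                  # assumed shipping window
shipped_mb_per_s = drive_gb * 1000 / (overnight_hours * 3600)  # ~2 MB/s per drive

print(f"~{fill_hours:.1f} h to fill a drive, ~{drives_in_3h:.1f} drives writable in 3 h")
print(f"~{shipped_mb_per_s:.1f} MB/s per drive shipped overnight @ 19.95$/night")
```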

21 Doing Studies of IO Bandwidth
SCSI & IDE bandwidth:
– ~15-30 MBps sequential
– SCSI 10k rpm: ~110 kaps @ 600$
– IDE 7,200 rpm: ~80 kaps @ 250$ (comparison sketched below)
Get 2 disks for the price of 1:
– More bandwidth for reads
– RAID
– 10k$ RAIDed TB by 2001
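The SCSI-vs-IDE numbers imply a simple accesses-per-dollar comparison; a minimal sketch (kaps here meaning kilobyte accesses per second, as in the Rules-of-Thumb report):

```python
# Random-access price/performance from the disk numbers quoted on this slide.
scsi_kaps, scsi_price = 110, 600     # 10k rpm SCSI
ide_kaps, ide_price = 80, 250        # 7,200 rpm IDE

scsi_per_dollar = scsi_kaps / scsi_price
ide_per_dollar = ide_kaps / ide_price
print(f"SCSI: {scsi_per_dollar:.2f} kaps/$, IDE: {ide_per_dollar:.2f} kaps/$ "
      f"(IDE ~{ide_per_dollar / scsi_per_dollar:.1f}x better per dollar)")
```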

22 What I Have Been Doing: Peta Bumps, 10k$/TB Scalable Computing, Sloan Digital Sky Survey

23 The Sloan Digital Sky Survey
A project run by the Astrophysical Research Consortium (ARC).
Goal: to create a detailed multicolor map of the Northern Sky over 5 years, with a budget of approximately $80M.
Data size: 40 TB raw, 1 TB processed.
Participants: The University of Chicago, Princeton University, The Johns Hopkins University, The University of Washington, Fermi National Accelerator Laboratory, US Naval Observatory, The Japanese Participation Group, The Institute for Advanced Study; Sloan Foundation, NSF, DOE, NASA.

24 Scientific Motivation
– Create the ultimate map of the Universe: the Cosmic Genome Project!
– Study the distribution of galaxies: What is the origin of fluctuations? What is the topology of the distribution?
– Measure the global properties of the Universe: How much dark matter is there?
– Local census of the galaxy population: How did galaxies form?
– Find the most distant objects in the Universe: What are the highest quasar redshifts?

25 First Light Images
Telescope: first light May 9th, 1998; equatorial scans.

26 The First Stripes
Camera: 5-color imaging of >100 square degrees; multiple scans across the same fields; photometric limits as expected.

27 SDSS Data Flow

28 SDSS Data Products (all raw data saved in a tape vault at Fermilab)
– Object catalog: 400 GB, parameters of >10^8 objects
– Redshift catalog: 1 GB, parameters of 10^6 objects
– Atlas images: 1.5 TB, 5-color cutouts of >10^8 objects
– Spectra: 60 GB, in one-dimensional form
– Derived catalogs: 20 GB, clusters and QSO absorption lines
– 4x4-pixel all-sky map: 60 GB, heavily compressed

29 Distributed Implementation (architecture diagram): user interface, analysis engine, and SX engine over an Objectivity federation, with one master Objectivity/RAID node and multiple slave Objectivity/RAID nodes.

30 What We Have Been Doing
– Helping move the data to SQL: database design, data loading.
– Experimenting with queries on a 4 M object DB: 20 questions like "find gravitational lens candidates"; queries use parallelism, and most run in a few seconds (automatic parallelism); some run in hours (e.g. neighbors within 1 arcsec; the per-pair test is sketched below); EASY to ask questions.
– Helping with an outreach website: SkyServer.
– Personal goal: try data-mining techniques to re-discover astronomy.
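The "neighbors within 1 arcsec" query is slow because it is a spatial self-join over a hundred million objects; the per-pair test it boils down to is just angular separation. A generic sketch follows (not the project's actual implementation; a real query needs a spatial index rather than this all-pairs predicate):

```python
import math

ARCSEC_RAD = math.pi / (180 * 3600)        # one arcsecond in radians

def angular_separation(ra1, dec1, ra2, dec2):
    """Angular separation in radians between two sky positions given in degrees,
    using the haversine form, which stays accurate at arcsecond scales."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    a = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return 2 * math.asin(math.sqrt(a))

def is_neighbor(a, b, radius_arcsec=1.0):
    """Per-pair predicate of a naive neighbors query: a and b are (ra, dec) in degrees."""
    return angular_separation(a[0], a[1], b[0], b[1]) < radius_arcsec * ARCSEC_RAD

print(is_neighbor((180.0, 2.0), (180.0002, 2.0001)))   # ~0.8 arcsec apart -> True
```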

31 What I Have Been Doing: Peta Bumps, 10k$/TB Scalable Computing, Sloan Digital Sky Survey

