
1 Scaleable Computing Jim Gray Microsoft Corporation Gray@Microsoft.com

2 Thesis: Scaleable Servers
Commodity hardware allows new applications, and new applications need huge servers. Clients and servers are built of the same "stuff": commodity software and commodity hardware.
Servers should be able to scale up (grow a node by adding CPUs, disks, networks), scale out (grow by adding nodes), and scale down (start small).
Key software technologies: objects, transactions, clusters, parallelism.

3 1987: 256 tps Benchmark
A $14M computer (Tandem), a dozen people, a false floor, and two rooms of machines.
Hardware: a 32-node processor array and a 40 GB disk array (80 drives), simulating 25,600 clients.
People: admin expert, hardware experts, auditor, network expert, manager, performance expert, OS expert, DB expert.

4 1988: DB2 + CICS Mainframe 65 tps
An IBM 4391: a $2M computer with a refrigerator-sized CPU, 2 x 3725 network controllers, and a 16 GB disk farm (4 x 8 x 0.5 GB), driving a simulated network of 800 clients. A staff of 6 ran the benchmark.

5 1997: 10 years later 1 Person and 1 box = 1250 tps
One breadbox is roughly 5x the 1987 machine room, and 23 GB is hand-held. One person does all the work (hardware expert, OS expert, net expert, DB expert, app expert). Cost per tps is 1,000x less: 25 micro-dollars per transaction.
Hardware: 4 x 200 MHz CPUs, 1/2 GB DRAM, 12 x 4 GB disks, 3 x 7 x 4 GB disk arrays.

6 What Happened?
Moore's law: things get 4x better every 3 years (applies to computers, storage, and networks).
New economics: commodity-class machines cut both price per MIPS and software cost per year by orders of magnitude as you move from mainframe to minicomputer to microcomputer.
GUI: the human/computer tradeoff now optimizes for people, not computers.

7 What Happens Next?
Last 10 years: 1,000x improvement. Next 10 years: ????
Today: text and image servers are free; at 25 m$/hit, advertising pays for them. Future: video, audio, ... servers are free too. "You ain't seen nothing yet!"

8 Kinds Of Information Processing
Point-to-point, immediate: conversation, money. Broadcast, immediate: lecture, concert. Point-to-point, time-shifted: mail. Broadcast, time-shifted: book, newspaper. Electronically, the immediate forms are the Network and the time-shifted forms are the Database.
It's ALL going electronic. Immediate traffic is being stored for analysis (so it ALL becomes database), and analysis and automatic processing are being added.

9 Why Put Everything In Cyberspace?
Point-to-point OR broadcast, immediate OR time-delayed, over the network and into the database.
Low rent (minimum $/byte), shrinks time (now or later), shrinks space (here or there), and automates processing: knowbots that locate, process, analyze, and summarize.

10 Magnetic Storage Cheaper Than Paper
File cabinet: cabinet (four drawer) $250, paper (24,000 sheets) $250, space (at $10/ft2) $180; total about $700, roughly 3¢ per sheet.
Disk: a 4 GB disk is about $800. Stored as ASCII it holds about 2 million pages (80x cheaper per sheet); stored as images, about 200,000 pages (8x cheaper per sheet).
Conclusion: store everything on disk.
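A quick check of the arithmetic with the slide's own figures; the page counts correspond to roughly 2 KB per ASCII page and 20 KB per page image (my reading of the slide), and the ratios come out roughly in line with the 80x and 8x quoted above:

```python
# Paper vs. disk cost per page, using the figures on the slide.
paper_total_dollars = 250 + 250 + 180       # cabinet + paper + floor space (slide rounds to ~$700)
paper_sheets = 24_000
paper_cents = 100 * paper_total_dollars / paper_sheets      # ~2.8 cents/sheet

disk_dollars = 800                          # one 4 GB disk
ascii_pages = 2_000_000                     # ~2 KB of ASCII per page
image_pages = 200_000                       # ~20 KB per scanned page image
ascii_cents = 100 * disk_dollars / ascii_pages              # ~0.04 cents/page
image_cents = 100 * disk_dollars / image_pages              # ~0.4 cents/page

print(f"paper:  {paper_cents:.2f} cents/sheet")
print(f"ASCII:  {ascii_cents:.3f} cents/page ({paper_cents / ascii_cents:.0f}x cheaper)")
print(f"images: {image_cents:.2f} cents/page ({paper_cents / image_cents:.0f}x cheaper)")
```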

11 Databases: Information at Your Fingertips™, Information Network™, Knowledge Navigator™
All information will be in an online database (somewhere). You might record everything you:
Read: 10 MB/day, 400 GB/lifetime (eight tapes today)
Hear: 400 MB/day, 16 TB/lifetime (three tapes/year today)
See: 1 MB/s, 40 GB/day, 1.6 PB/lifetime (maybe someday)
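As a rough sanity check on those lifetime figures (the ~100-year lifetime and ~11 waking hours of video per day are my assumptions, not the slide's):

```python
# Rough sanity check of the "record your whole life" figures.
DAYS = 100 * 365                               # assume a ~100-year lifetime

read_gb = 10 * DAYS / 1_000                    # 10 MB/day  -> ~365 GB   (slide: 400 GB)
hear_tb = 400 * DAYS / 1_000_000               # 400 MB/day -> ~14.6 TB  (slide: 16 TB)
see_gb_per_day = 1 * 11 * 3600 / 1_000         # 1 MB/s for ~11 waking hours -> ~40 GB/day
see_pb = see_gb_per_day * DAYS / 1_000_000     # -> ~1.4 PB              (slide: 1.6 PB)

print(round(read_gb), round(hear_tb, 1), round(see_gb_per_day), round(see_pb, 1))
```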

12 Database Store ALL Data Types
The old world: millions of small (100-byte) objects, e.g., a People table with just Name and Address columns.
The new world: billions of objects, big objects (1 MB), and objects with behavior (methods); the People table grows Papers, Picture, and Voice columns.
Drivers: the paperless office, the Library of Congress online, all information online, entertainment, publishing, business, the WWW and Internet.

13 Billions Of Clients Every device will be “intelligent”
Doors, rooms, cars… Computing will be ubiquitous

14 Billions Of Clients Need Millions Of Servers
All clients are networked to servers; they may be nomadic or on-demand, and fast clients want faster servers. Servers provide shared data, control, coordination, and communication. (Diagram: mobile and fixed clients connecting to servers and super servers.)

15 Thesis: Many Little Beat Few Big
The price spectrum runs from $1M mainframes through $100K minis to $10K micros (with nano and pico processors below), while the storage hierarchy runs from 10 picosecond on-chip RAM through 10 nanosecond RAM, 10 millisecond disc, and 10 second tape archive, with capacities from 1 MB to 100 TB and disc form factors shrinking from 14" and 9" to 5.25", 3.5", 2.5", and 1.8".
The future processor is a "smoking, hairy golf ball": 1M SPECmarks, 1 TFLOP, 10^6 clocks to bulk RAM, the event horizon on chip, VM reincarnated, multiprogrammed cache, on-chip SMP.
Open questions: How to connect the many little parts? How to program the many little parts? Fault tolerance?

16 Future Super Server: 4T Machine
An array of 1,000 "4B machines" (Cyber Bricks), each with roughly a 1 Bips processor, 1 BB of DRAM, 10 BB of disks, and 1 Bbps comm lines, plus a 1 TB tape robot, for a few megabucks. (The diagram draws each Cyber Brick as a CPU with 5 GB RAM and a 50 GB disc.)
Challenge: manageability, programmability, security, availability, scaleability, affordability, all as easy as a single system.
Future servers are CLUSTERS of processors and discs; distributed database techniques make clusters work.

17 Performance = Storage Accesses not Instructions Executed
In the "old days" we counted instructions and I/Os; now we count memory references, because processors wait most of the time.
Where the time goes (clock ticks used by AlphaSort): sort, OS, disc wait, memory wait, I-cache misses, B-cache data misses, D-cache misses.
AlphaSort runs at about 70 MIPS; "real" apps have worse I-cache misses, so they run at about 60 MIPS if well tuned, 20 MIPS if not.

18 Storage Latency: How Far Away is the Data?
Registers: 1 clock tick (my head, 1 min)
On-chip cache: 2 clock ticks (this room)
On-board cache: 10 clock ticks (this campus, 10 min)
Memory: 100 clock ticks (Sacramento, 1.5 hr)
Disk: 10^6 clock ticks (Pluto, 2 years)
Tape/optical robot: 10^9 clock ticks (Andromeda, 2,000 years)

19 The Hardware Is In Place… And Then a Miracle Occurs?
SNAP: Scaleable Networks And Platforms. A commodity distributed OS built on commodity platforms and a commodity network interconnect enables parallel applications.

20 Thesis: Scaleable Servers
Commodity hardware allows new applications, and new applications need huge servers. Clients and servers are built of the same "stuff": commodity software and commodity hardware.
Servers should be able to scale up (grow a node by adding CPUs, disks, networks), scale out (grow by adding nodes), and scale down (start small).
Key software technologies: objects, transactions, clusters, parallelism.

21 Scaleable Servers BOTH SMP And Cluster
Grow up with SMP (a 4xP6 is now standard); grow out with a cluster (clusters are built from inexpensive parts). (Diagram: personal system, departmental server, SMP super server, cluster of PCs.)

22 SMPs Have Advantages
Single system image: easier to manage and easier to program; threads share memory, disk, and net.
A 4x SMP is a commodity, and software is capable of 16x.
Problems: more than 4-way is not commodity, there is a scale-down problem (starter systems are expensive), and there is a BIGGEST one.

23 Building the Largest Node
There is a biggest node (its size grows over time); today, with NT, it is probably 1 TB, and we are building it (with help from DEC and SPIN-2): a 1 TB GeoSpatial SQL Server database (1.4 TB of disks = 320 drives; 30K BTU, 8 KVA, 1.5 metric tons) that will be put on the Web as a demo app.
It holds a 10-meter image of the ENTIRE PLANET, 2-meter imagery of interesting parts (2% of land), and better resolution in the US (courtesy of USGS); one pixel per meter would be 500 TB uncompressed.
The 1-TB SQL Server DB holds the satellite and aerial photos plus support files: a 1-TB home page™.

24 What's a TeraByte?
1 terabyte is roughly 1,000,000,000 business letters (150 miles of book shelf), 100,000,000 book pages, 50,000,000 FAX images, 10,000,000 TV pictures (MPEG), 4,000 LandSat 100m earth images, or 100,000,000 web pages. The Library of Congress (in ASCII) is about 25 TB.
1980: $200 million of disc (thousands of discs) plus a $5 million tape silo (thousands of tapes). 1997: $200K of magnetic disc plus $30K of nearline tape.
Terror Byte!

25 TB DB User Interface

26 TPC-C Web-Based Benchmarks
The client is a Web browser (7,500 of them!). It submits order, invoice, and query transactions through a Web page interface; the Web server (IIS) translates them via ODBC into SQL, and the database does the work.
Net: easy to implement, and performance is GREAT!
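A minimal sketch of that path with the Web tier reduced to one function; the DSN, credentials, and table schema are hypothetical (the slide gives none), and Python's pyodbc stands in for the ODBC layer:

```python
# Sketch: browser form -> web server -> ODBC -> SQL, as on the slide.
# DSN, credentials, and table/column names are made up for illustration.
import pyodbc

def submit_order(customer_id: int, item_id: int, qty: int) -> None:
    """Called by the web server for each 'Order' page a browser submits."""
    conn = pyodbc.connect("DSN=tpcc;UID=web;PWD=secret")
    try:
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO orders (customer_id, item_id, qty) VALUES (?, ?, ?)",
            customer_id, item_id, qty,
        )
        conn.commit()            # the database does the transactional work
    finally:
        conn.close()
```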

27 TPC-C Shows How Far SMPs Have Come
Performance is amazing: 2,000 users is the minimum, and a 4x12 Alpha cluster supports 30,000 users (Oracle).
Peak performance: 30,390 tpmC at $305/tpmC (Oracle/DEC). Best price/performance: 6,712 tpmC at $65/tpmC (MS SQL/DEC/Intel).
The graphs show UNIX's high price and the diseconomy of scale-up.

28 TPC-C SMP Performance
SMPs do offer speedup, but a 4x P6 is better than some 18x MIPSco systems.

29 The TPC-C Revolution Shows How Far NT and SQL Server have Come
Recent Microsoft SQL Server benchmarks are Web-based, and they show economy of scale on Windows NT.
(Chart: price in $/tpmC versus performance in tpmC for DB2, Informix, Microsoft, Oracle, and Sybase, over roughly 1,000-8,000 tpmC and $0-$250/tpmC; lower is better, and Microsoft SQL Server shows economy of scale and the lowest price.)

30 What Happens To Prices? No expensive UNIX front end (20$/tpmC)
No expensive TP monitor software (10$/tpmC) => 65$/tpmC

31 1 billion transactions per day
Grow UP and OUT: a 1-terabyte DB and 1 billion transactions per day, where a cluster is a collection of nodes as easy to program and manage as a single node. (Diagram: personal system, departmental server, SMP super server, cluster.)

32 Clusters Have Advantages
Clients and servers are made from the same stuff. Inexpensive: built with commodity components. Fault tolerant: spare modules mask failures. Modular growth: grow by adding small modules. Unlimited growth: there is no biggest one.

33 Windows NT Clusters
Key goals: easy (to install, manage, program), reliable (better than a single node), scaleable (added parts add power).
Microsoft and 60 vendors are defining NT clusters, and almost all big hardware and software vendors are involved. No special hardware is needed, but it may help. This enables commodity fault tolerance and commodity parallelism (data mining, virtual reality...), and it is also great for workgroups.
Initial release: two-node failover, in beta testing since December 1996, with SAP, Microsoft, and Oracle giving demos. It covers file, print, Internet, mail, DB, and other services, is easy to manage, and each node can be a 4x (or more) SMP.
Next (NT5): "Wolfpack" is a modest-size cluster, about 16 nodes (so 64 to 128 CPUs), with no hard limit; the algorithms are designed to go further.

34 SQL Server™ Failover Using “Wolfpack” Windows NT Clusters
Each server "owns" half the database, on shared SCSI disk strings (plus private disks). When one fails, the other server takes over the shared disks, recovers the database, and serves it to the clients.
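A toy sketch of that takeover logic; the class and method names are invented for illustration and are not the Wolfpack API:

```python
# Toy failover: two servers, each owning half the database on shared disks.
class Node:
    def __init__(self, name, partitions):
        self.name = name
        self.owned = set(partitions)      # DB halves served from the shared SCSI strings
        self.alive = True

    def heartbeat(self) -> bool:
        return self.alive

def failover(survivor, failed):
    # Take ownership of the failed node's shared disks, recover, and serve.
    survivor.owned |= failed.owned
    failed.owned.clear()
    print(f"{survivor.name} now serves {sorted(survivor.owned)}")

a, b = Node("A", {"db_half_1"}), Node("B", {"db_half_2"})
b.alive = False                           # simulate a crash
if not b.heartbeat():
    failover(a, b)                        # A recovers and serves both halves
```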

35 How Much Is 1 Billion Transactions Per Day?
1 Btpd = 11,574 tps (transactions per second), or about 700,000 tpm (transactions per minute).
For comparison: AT&T handles 185 million calls on a peak day worldwide, and Visa does about 20 M transactions per day, with 400 M customers, 250,000 ATMs worldwide, and 7 billion transactions per year (card + cheque) in 1994.
(Chart: millions of transactions per day on a log scale from 0.1 to 1,000 Mtpd for AT&T, Visa, BofA, NYSE, and the 1 Btpd target.)
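The conversion is plain arithmetic:

```latex
\frac{10^{9}\ \text{transactions/day}}{86{,}400\ \text{s/day}} \approx 11{,}574\ \text{tps},
\qquad
\frac{10^{9}\ \text{transactions/day}}{1{,}440\ \text{min/day}} \approx 694{,}000 \approx 700{,}000\ \text{tpm}.
```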

36 Billion Transactions per Day Project
Building a 20-node Windows NT cluster (with help from Intel): more than 800 disks, all commodity parts, using SQL Server and DTC distributed transactions. Each node holds 1/20th of the DB and does 1/20th of the work, and 15% of the transactions are "distributed" (they touch more than one node); a partitioning sketch follows.
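A minimal sketch of that partitioning, assuming simple hash placement by key (the project's actual placement rule is not given on the slide):

```python
# 20-way partitioning sketch: a transaction is "distributed" when the keys
# it touches map to more than one node (and then needs DTC two-phase commit).
NODES = 20

def node_of(key: int) -> int:
    return key % NODES                   # assumed placement rule, for illustration

def is_distributed(keys) -> bool:
    return len({node_of(k) for k in keys}) > 1

print(is_distributed([41, 61]))          # False: both keys land on node 1
print(is_distributed([41, 62]))          # True: nodes 1 and 2, a distributed transaction
```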

37 Parallelism: The OTHER Aspect of Clusters
Clusters of machines allow two kinds of parallelism: many little jobs (online transaction processing: TPC-A, B, C...) and a few big jobs (data search and analysis: TPC-D, DSS, OLAP). Both give automatic parallelism.

38 Kinds of Parallel Execution
Pipeline parallelism: one sequential program feeds its output to the next sequential program.
Partition parallelism: the inputs are split N ways, copies of a sequential program run on the pieces, and the outputs are merged M ways.
(Jim Gray & Gordon Bell: VLDB 95 Parallel Database Systems Survey)

39 Data Rivers Split + Merge Streams
N producers feed M consumers through N x M data streams: the "river". Producers add records to the river and consumers consume records from it, so each program stays purely sequential. The river does the flow control and buffering, and does the partitioning and merging of data records. (River = Split/Merge in Gamma = the Exchange operator in Volcano.)
(Jim Gray & Gordon Bell: VLDB 95 Parallel Database Systems Survey)
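A small sketch of a river with hash partitioning; the queue sizes, thread counts, and hash rule are my choices for illustration:

```python
# Data-river sketch: N sequential producers, M sequential consumers.
# The "river" (bounded queues) does buffering, flow control, and partitioning.
import threading, queue

N_PRODUCERS, M_CONSUMERS = 3, 2
rivers = [queue.Queue(maxsize=100) for _ in range(M_CONSUMERS)]
DONE = object()

def producer(pid: int) -> None:
    for i in range(5):
        record = (pid, i)
        rivers[hash(record) % M_CONSUMERS].put(record)   # partition step
    for r in rivers:                                     # signal end-of-stream to every consumer
        r.put(DONE)

def consumer(cid: int) -> None:
    finished = 0
    while finished < N_PRODUCERS:                        # merge step: drain until all producers end
        rec = rivers[cid].get()
        if rec is DONE:
            finished += 1
        else:
            print(f"consumer {cid} got {rec}")

threads = [threading.Thread(target=producer, args=(p,)) for p in range(N_PRODUCERS)]
threads += [threading.Thread(target=consumer, args=(c,)) for c in range(M_CONSUMERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```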

40 Partitioned Execution
Partitioned execution spreads computation and I/O among processors; partitioned data gives NATURAL parallelism. (Jim Gray & Gordon Bell: VLDB 95 Parallel Database Systems Survey)

41 N x M way Parallelism N inputs, M outputs, no bottlenecks.
Partitioned data, and partitioned and pipelined data flows. (Jim Gray & Gordon Bell: VLDB 95 Parallel Database Systems Survey)

42 The Parallel Law Of Computing
Grosch's Law: 2x the money buys 4x the performance (1 MIPS for $1, but 1,000 MIPS for only $32, about $0.03/MIPS).
The Parallel Law: 2x the money buys 2x the performance (1 MIPS for $1, 1,000 MIPS for $1,000).
It needs linear speedup and linear scale-up, which is not always possible.
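In formula form, my restatement of the two laws:

```latex
\text{Grosch:}\ \ \text{perf} \propto \text{price}^{2}
  \;\Rightarrow\; \text{price}(1{,}000\ \text{MIPS}) = \$1 \cdot \sqrt{1{,}000} \approx \$32,
\qquad
\text{Parallel:}\ \ \text{perf} \propto \text{price}
  \;\Rightarrow\; \text{price}(1{,}000\ \text{MIPS}) = \$1{,}000.
```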

43 Thesis: Scaleable Servers
Commodity hardware allows new applications, and new applications need huge servers. Clients and servers are built of the same "stuff": commodity software and commodity hardware.
Servers should be able to scale up (grow a node by adding CPUs, disks, networks), scale out (grow by adding nodes), and scale down (start small).
Key software technologies: objects, transactions, clusters, parallelism.

44 The BIG Picture Components and transactions
Software modules are objects. An Object Request Broker (a.k.a. Transaction Processing Monitor) connects objects (clients to servers), and standard interfaces allow software plug-ins. A transaction ties the execution of a "job" into an atomic unit: all-or-nothing, durable, isolated.

45 ActiveX and COM
COM is the Microsoft model, the engine inside OLE; ALL Microsoft software is based on COM (ActiveX). CORBA + OpenDoc is the equivalent, and there is heated debate over which is best.
Both share the same key goals: encapsulation (hide implementation), polymorphism (generic operations, key to GUI and reuse), versioning (allow upgrades), transparency (local/remote), security (invocation can be remote), shrink-wrap (minimal inheritance), and automation (easy).
COM is now managed by the Open Group.

46 Linking And Embedding Objects are data modules; transactions are execution modules
Link: a pointer to an object somewhere else (think URL on the Internet). Embed: the bytes are here. Objects may be active and can call back to subscribers.

47 Commodity Software Components Inexpensive OS, DBMS…and plug-ins
Recent TPC-C prices: Oracle on DEC UNIX, 30.4K tpmC at $305/tpmC; Informix on DEC UNIX at $277/tpmC; DB2 on Solaris; SQL Server on Compaq and Windows NT at $65/tpmC (using the Web, no TP monitor!); Oracle on Windows NT.
Net: "open" solutions can do even the biggest jobs, with thousands of online users per "node" of a cluster.
ActiveX, VBX, and Java plug-ins: spreadsheets, GeoQuery, FAX, voice, image libraries, a commodity component market.

48 Objects Meet Databases The basis for universal data servers, access, & integration
An object-oriented (COM-oriented) programming interface to data breaks the DBMS into components: anything can be a data source, and optimization/navigation runs "on top of" other data sources. It is a way to componentize a DBMS, and it makes an RDBMS an O-R DBMS (assuming the optimizer understands objects).
(Diagram: spreadsheets, photos, mail, maps, and documents feed the DBMS engine alongside the database.)
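A toy illustration of "anything can be a data source": one select operator runs over any iterable of rows, so an in-memory list and a CSV document are equally valid sources (the interface is invented for the sketch; it is not OLE DB):

```python
# Toy universal data access: any iterable of dict rows is a "data source".
import csv, io

def select(source, predicate):
    return (row for row in source if predicate(row))

# Source 1: an in-memory table.
people = [{"name": "Mike", "city": "Berk"}, {"name": "David", "city": "NY"}]

# Source 2: a CSV document (a string here so the sketch stays self-contained).
doc = io.StringIO("name,city\nWon,Austin\nAnn,NY\n")
csv_rows = csv.DictReader(doc)

for src in (people, csv_rows):
    for row in select(src, lambda r: r["city"] == "NY"):
        print(row["name"])               # David, then Ann
```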

49 The Pattern: Three Tier Computing
Presentation tier: clients do presentation, gather input, do some workflow (Xscript), and send high-level requests to the ORB (Object Request Broker).
Business objects tier: the ORB dispatches workflows and business objects, which act as proxies for the client and orchestrate flows and queues; server-side workflow scripts call on distributed business objects to execute the task.
Database tier: the shared data.

50 The Three Tiers
Web client: an HTML browser with VB or Java script engines and virtual machines (VBScript, JavaScript) plus VB and Java plug-ins, talking HTTP and DCOM over the Internet.
Middleware: the ORB / TP monitor, Web server, and pools of object servers, reached via HTTP + DCOM and talking DCOM (OLE DB, ODBC, ...) downward.
Object and data servers: the databases themselves, plus LU6.2 gateways to IBM legacy systems.

51 Why Did Everyone Go To Three-Tier?
Manageability: business rules must live with the data, and middleware provides the operations tools.
Performance (scaleability): server resources are precious, so the ORB dispatches requests to server pools.
Technology and physics: put UI processing near the user and shared-data processing near the shared data.
(Tiers: presentation, workflow / business objects, database.)

52 Why Put Business Objects at Server?
MOM's Business Objects: the customer comes to the store with a list, gives the list to a clerk, the clerk gets the goods and makes an invoice, and the customer pays the clerk and gets the goods. Easy to manage: the clerk controls access (encapsulation).
DAD's Raw Data: the customer comes to the store, takes what he wants, fills out an invoice, and leaves money for the goods. Easy to build: no clerks.

53 What Middleware Does ORB, TP Monitor, Workflow Mgr, Web Server
Registers transaction programs: workflow and business objects (DLLs). Pre-allocates server pools and provides the server execution environment. Dynamically checks authority (request-level security). Dispatches requests to servers, doing parameter binding and load balancing. Provides queues and an operator interface.

54 Server Side Objects Easy Server-Side Execution
A server gives the object a simple execution environment: the object gets start, invoke, and shutdown calls, and everything else is automatic (drag & drop business objects).
The environment supplies the network receiver, queue management, connections, context, security, configuration, the thread pool, synchronization, and shared data; the object supplies only the service logic.
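A minimal sketch of that start/invoke/shutdown contract; the class and method names are invented (real environments such as MTS define their own interfaces):

```python
# Sketch of the "start / invoke / shutdown" contract: the environment owns
# pooling and dispatch, the object owns only its service logic.
class BusinessObject:
    def start(self, context) -> None:          # called once when the pool activates the object
        self.context = context

    def invoke(self, request: dict) -> dict:   # called per request, parameters already bound
        return {"ok": True, "echo": request}

    def shutdown(self) -> None:                # called when the pool retires the object
        self.context = None

class ServerPool:
    """Stand-in for the middleware: pre-allocates objects and dispatches requests."""
    def __init__(self, factory, size=2):
        self.objects = [factory() for _ in range(size)]
        for obj in self.objects:
            obj.start(context={"user": "demo", "txn": None})

    def dispatch(self, request: dict) -> dict:
        obj = self.objects[hash(str(request)) % len(self.objects)]   # trivial load balancing
        return obj.invoke(request)

pool = ServerPool(BusinessObject)
print(pool.dispatch({"op": "order", "item": 42}))
```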

55 A new programming paradigm
Develop objects on the desktop (better yet, download them from the Net) and script the workflows as method invocations, all on the desktop. Then move the workflows and objects to the server(s). That gives desktop development with three-tier deployment: software CyberBricks.

56 Transactions Coordinate Components (ACID)
Transaction properties: Atomic (all or nothing), Consistent (old and new values), Isolated (automatic locking or versioning), Durable (once committed, effects survive).
Transactions are built into modern OSs: MVS/TM, Tandem TMF, VMS DEC-DTM, NT DTC.

57 Transactions & Objects
The application requests a transaction identifier (XID). The XID flows with method invocations, object managers join (enlist) in the transaction, and the Distributed Transaction Manager coordinates commit/abort.
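A compressed sketch of that enlist-then-coordinate pattern, with a toy two-phase commit standing in for the Distributed Transaction Manager (the names are illustrative, not the DTC API):

```python
# Toy coordinator: resource managers enlist under an XID, then the coordinator
# runs two-phase commit: prepare everywhere, then commit or abort everywhere.
class ResourceManager:
    def __init__(self, name): self.name = name
    def prepare(self, xid) -> bool:
        print(f"{self.name}: prepared {xid}")
        return True                        # vote yes (a real RM could vote no)
    def commit(self, xid): print(f"{self.name}: committed {xid}")
    def abort(self, xid):  print(f"{self.name}: aborted {xid}")

class Coordinator:
    def __init__(self, xid): self.xid, self.enlisted = xid, []
    def enlist(self, rm): self.enlisted.append(rm)          # object manager joins the transaction
    def end_transaction(self):
        if all(rm.prepare(self.xid) for rm in self.enlisted):
            for rm in self.enlisted: rm.commit(self.xid)
        else:
            for rm in self.enlisted: rm.abort(self.xid)

txn = Coordinator(xid="X-1")
for rm in (ResourceManager("orders-db"), ResourceManager("inventory-db")):
    txn.enlist(rm)
txn.end_transaction()
```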

58 Transactions Coordinate Components (ACID)
The programmer's view: bracket a collection of actions with Begin() ... Commit() on success or Begin() ... Rollback() on failure (whether the program requests it or a fault forces it). A simple failure model with only two outcomes: the whole bracket happens, or none of it does. A sketch of the bracket follows.
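A minimal sketch of the bracket, using Python's sqlite3 as a stand-in for the transactional store (the Begin/Commit/Rollback pattern maps onto any SQL engine the same way):

```python
# The Begin ... Commit / Begin ... Rollback bracket, sketched with sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INT)")
conn.execute("INSERT INTO account VALUES ('a', 100), ('b', 0)")
conn.commit()

def transfer(amount: int) -> None:
    try:                                                   # Begin()
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = 'a'", (amount,))
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = 'b'", (amount,))
        conn.commit()                                      # Commit(): both updates happen
    except Exception:
        conn.rollback()                                    # Rollback(): neither update happens
        raise

transfer(40)
print(conn.execute("SELECT * FROM account ORDER BY name").fetchall())   # [('a', 60), ('b', 40)]
```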

59 Distributed Transactions Enable Huge Throughput
Each node is capable of about 7 ktpmC (7,000 active users!), and nodes can be added to the cluster to support 100,000 users. Transactions coordinate the nodes, and the ORB / TP monitor spreads the work among them.

60 Distributed Transactions Enable Huge DBs
Distributed database technology spreads data among nodes Transaction processing technology manages nodes

61 Thesis: Scaleable Servers
Scaleable servers are built from CyberBricks and allow new applications. Servers should be able to scale up, out, and down.
Key software technologies: clusters (tie the hardware together), parallelism (uses the independent CPUs, stores, and wires), objects (software CyberBricks), and transactions (mask errors).

62 Computer Industry Laws (Rules of thumb)
Metcalfe's law; Moore's first law; Bell's computer classes (seven price tiers); Bell's platform evolution; Bell's platform economics; Bill's law; software economics; Grove's law; Moore's second law; is info-demand infinite?; the death of Grosch's law.

63 Metcalfe's Law: Network Utility = Users²
How many connections can each user make? 1 user: no utility. 100,000 users: a few contacts. 1 million users: many on the Net. 1 billion users: everyone is on the Net. That is why the Internet is so "hot": the benefit grows quadratically with the number of users.

64 Moore's First Law
XXX doubles every 18 months, a 60% increase per year: microprocessor speeds, chip density, magnetic disk density, and communications bandwidth (WAN bandwidth is approaching LANs).
Exponential growth means the past does not matter: 10x here, 10x there, and soon you're talking REAL change.
PC costs decline faster than any other platform (volume and learning curves), so PCs will be the building bricks of all future systems.
(Chart: single-chip memory size growing from 1 Kbit in 1970 toward 1 Gbit around 2000; recent chips hold 2 MB to 32 MB.)
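The growth rates quoted in this deck are consistent with each other:

```latex
2^{12/18} \approx 1.59 \;(\text{doubling every 18 months} \approx 60\%/\text{year}),
\qquad
1.6^{3} \approx 4.1 \;(\text{4x every 3 years}),
\qquad
1.6^{10} \approx 110 \;(\approx 100\text{x per decade}).
```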

65 Bumps In The Moore’s Law Road
DRAM: the 1988 United States anti-dumping rules kept the price roughly flat for several years.
Magnetic disk: the price fell about 10x per decade, then accelerated to about 4x every 3 years (roughly 100x per decade).
(Charts: $/MB of DRAM and $/MB of disk, 1970-2000, on log scales from about $0.01 to $10,000.)

66 Gordon Bell’s 1975 VAX Planning Model... He Didn’t Believe It!
System price (K$) = 5 x 3 x 0.04 x memory size / 1.26^(t - 1972), where 5x reflects memory being 20% of system cost, 3x is the DEC markup, and $0.04 is the price per byte.
He didn't believe the projection of a $500 machine, and he couldn't comprehend the implications.
(Chart: projected system price on a log scale from 0.01 K$ to 100,000 K$, 1960-2000, for memory sizes of 16 KB to 8 MB.)
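The 1.26 denominator is what drives the prediction: a 26% improvement per year compounds to an order of magnitude per decade.

```latex
\text{Price}(t) \;=\; \frac{5 \times 3 \times 0.04 \times \text{memory size}}{1.26^{\,t-1972}}\ \text{K\$},
\qquad
1.26^{10} \approx 10.1 \;\Rightarrow\; \text{about 10x cheaper per decade}.
```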

67 Gordon Bell's Processing, Memories, And Comm: 100 Years
(Chart: projected growth of processing, primary memory, secondary memory, POTS bandwidth (bps), and backbone bandwidth from 1947 to 2047.)

68 Gordon Bell’s Seven Price Tiers
$10: wrist-watch computers. $100: pocket/palm computers. $1,000: portable computers. $10,000: personal computers (desktop). $100,000: departmental computers (closet). $1,000,000: site computers (glass house). $10,000,000: regional computers (glass castle).
A super server costs more than $100,000; a "mainframe" costs more than $1 million and must be an array of processors, disks, tapes, and comm ports.

69 Bell’s Evolution Of Computer Classes
Technology enables two evolutionary paths: (1) constant performance at decreasing cost, and (2) constant price with increasing performance. Mainframes (central) were followed by minis (departmental), PCs (personal), and workstations, with the next class ("??") still to come.
1.26x/year = 2x every 3 years = roughly 10x per decade (prices fall to 1/1.26 ≈ 0.8 each year); 1.6x/year = 4x every 3 years = roughly 100x per decade (1/1.6 ≈ 0.62).

70 Gordon Bell’s Platform Economics
Traditional computers are custom or semi-custom, high-tech and high-touch; new computers are high-tech and no-touch.
(Chart: price (K$), volume (K), and application price for mainframe, workstation, and browser-class computers, on a log scale from 0.01 to 100,000.)

71 Software Economics
An engineer costs about $150,000/year, and R&D gets 5% to 15% of the budget, so you need roughly $1 million to $3 million of revenue per engineer.
Microsoft ($9 billion) breaks down roughly as: profit 24%, R&D 16%, tax 13%, SG&A 34%, product and service 13%.
(Charts: comparable revenue breakdowns are shown for Intel at $16 billion, IBM at $72 billion, and Oracle at $3 billion.)

72 Software Economics: Bill’s Law
Price = Marginal Cost + Fixed Cost / Units.
Bill Joy's law (Sun): don't write software for less than 100,000 platforms; at a $10 million engineering expense, that means a $1,000 price.
Bill Gates's law: don't write software for less than 1,000,000 platforms; at a $10 million engineering expense, that means a $100 price.
Examples: UNIX versus Windows NT ($3,500 versus $500), Oracle versus SQL Server ($100,000 versus $6,000), and no spreadsheet or presentation pack on UNIX/VMS/...
The result is commoditization of base software and hardware.
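Plugging in the slide's numbers, with the same $10 million fixed engineering cost in both cases:

```latex
\text{Price} \;=\; \text{MarginalCost} + \frac{\text{FixedCost}}{\text{Units}},
\qquad
\frac{\$10\text{M}}{100{,}000} = \$100/\text{unit},
\qquad
\frac{\$10\text{M}}{1{,}000{,}000} = \$10/\text{unit}.
```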

73 Grove’s Law The New Computer Industry
Horizontal integration is the new structure: each layer picks the best from the layer below. The desktop (client/server) share of the market grew from 50% in 1991 to 75% in 1995.
Function (example company): Operation (AT&T), Integration (EDS), Applications (SAP), Middleware (Oracle), Baseware (Microsoft), Systems (Compaq), Silicon & Oxide (Intel & Seagate).

74 Moore's Second Law
The cost of a fab line doubles every generation (three years). The money limit is hard to imagine: a $10 billion line, then a $20 billion line, then a $40 billion line.
The physical limit: quantum effects at 0.25 micron now, and 0.05 micron seems hard (12 years, three generations away). Lithography needs X-ray below 0.13 micron.
(Chart: fab-line cost in $million, from about $1M in 1960 toward $10,000M by 2000.)

75 Constant Dollars Versus Constant Work
Constant work: one SuperServer can do all the world's computations.
Constant dollars: the world spends 10% on information processing, and computers are moving from 5% penetration to 50%, i.e., from $300 billion to $3 trillion. We have the patent on the byte and the algorithm.

76 Crossing The Chasm
Old technology, old market: boring (competitive, slow growth).
New technology, old market: hard (customers find the product).
Old technology, new market: hard (the product finds customers).
New technology, new market: very hard (no product, no customers).

