TPC Benchmarks Charles Levine Microsoft Modified by Jim Gray March 1997.

1 TPC Benchmarks Charles Levine Microsoft Modified by Jim Gray March 1997

2 Outline
- Introduction
- History of TPC
- TPC-A and TPC-B
- TPC-C
- TPC-D
- TPC Futures

3 Benchmarks: What and Why
- What is a benchmark?
  - Domain specific
  - No single metric possible
  - The more general the benchmark, the less useful it is for anything in particular.
- A benchmark is a distillation of the essential attributes of a workload.

4 Benchmarks: What and Why
- Desirable attributes:
  - Relevant: meaningful within the target domain
  - Understandable
  - Good metric(s): linear, orthogonal, monotonic
  - Scaleable: applicable to a broad spectrum of hardware/architectures
  - Coverage: does not oversimplify the typical environment
  - Acceptance: vendors and users embrace it
  - Portable: not limited to one hardware/software vendor/technology

5 Benefits and Liabilities
- Good benchmarks:
  - Define the playing field
  - Accelerate progress: engineers do a great job once the objective is measurable and repeatable
  - Set the performance agenda: measure release-to-release progress; set goals (e.g., 10,000 tpmC, < 100 $/tpmC)
  - Something managers can understand (!)
- Benchmark abuse:
  - Benchmarketing
  - Benchmark wars: more $ spent on ads than on development

6 Benchmarks Have a Lifetime
- Good benchmarks drive industry and technology forward.
- At some point, all reasonable advances have been made.
- Benchmarks can become counterproductive by encouraging artificial optimizations.
- So, even good benchmarks become obsolete over time.

7 Outline
- Introduction
- History of TPC
- TPC-A and TPC-B
- TPC-C
- TPC-D
- TPC Futures

8 What is the TPC?
- TPC = Transaction Processing Performance Council
- Founded in Aug/88 by Omri Serlin and 8 vendors.
- Membership has held steady for the last several years.
- Everybody who's anybody in software & hardware
- De facto industry standards body for OLTP performance
- Administered by: Shanley Public Relations, N. First St., Suite 600, San Jose, CA
- Most TPC specs, info, and results are on the TPC web page
- TPC database (unofficial)
- News: Omri Serlin's FT Systems News (monthly magazine)

9 Two Seminal Events Leading to the TPC
- Anon, et al., "A Measure of Transaction Processing Power," Datamation, April Fools' Day, 1985
  - "Anon, et al." = Jim Gray (Dr. E. A. Anon) and 24 of his closest friends
  - Sort: 1M 100-byte records
  - Mini-batch: copy 1000 records
  - DebitCredit: simple ATM-style transaction
- Tandem TopGun benchmark
  - DebitCredit: 212 tps on NonStop SQL in 1987 (!)
  - Audited by Tom Sawyer of Codd and Date (a first)
  - Full disclosure of all aspects of the tests (a first)
  - Started the ET1/TP1 benchmark wars of '87-'89

10 1987: 256 tps Benchmark
- 14 M$ computer (Tandem)
- A dozen people: OS expert, network expert, DB expert, performance expert, hardware experts, admin expert, auditor, manager
- False floor, 2 rooms of machines
- Simulated 25,600 clients
- A 32-node processor array
- A 40 GB disk array (80 drives)

11 1988: DB2 + CICS Mainframe, 65 tps
- IBM 4391 (refrigerator-sized CPU)
- Simulated network of 800 clients
- 2 M$ computer
- Staff of 6 to do the benchmark
- 2 x 3725 network controllers
- 16 GB disk farm (4 x 8 x .5 GB)

12 1997: 10 Years Later: 1 Person and 1 Box = 1250 tps
- 1 breadbox ~ 5x the 1987 machine room
- 23 GB is hand-held
- One person does all the work (hardware, OS, net, DB, and app expert in one)
- Cost/tps is 1,000x less: 25 micro-dollars per transaction
- 4 x 200 MHz CPUs, 1/2 GB DRAM, 12 x 4 GB disks, 3 x 7 x 4 GB disk arrays

13 What Happened?
- Moore's law: things get 4x better every 3 years (applies to computers, storage, and networks)
- New economics: price/MIPS and annual software cost fall by orders of magnitude at each step down from mainframe to minicomputer to microcomputer (microcomputer: ~10 $/MIPS, ~1 k$/year software)
- GUI: human-computer tradeoff; optimize for people, not computers
(chart: price over time for mainframe, mini, and micro)

14 TPC Milestones
- 1989: TPC-A ~ industry standard for DebitCredit
- 1990: TPC-B ~ database-only version of TPC-A
- 1992: TPC-C ~ more representative, balanced OLTP
- 1994: TPC requires that all results be audited
- 1995: TPC-D ~ complex decision support (query)
- 1995: TPC-A/B declared obsolete by the TPC
- Non-starters (both failed during final approval in 1996):
  - TPC-E ~ Enterprise, for the mainframers
  - TPC-S ~ Server component of TPC-C

15 TPC vs. SPEC
- SPEC (System Performance Evaluation Cooperative)
  - SPECmarks
  - SPEC ships code
  - Unix-centric, CPU-centric
- TPC ships specifications
  - Ecumenical
  - Database/system/TP-centric
  - Price/performance
- The TPC and SPEC happily coexist; there is plenty of room for both.

16 Outline
- Introduction
- History of TPC
- TPC-A and TPC-B
- TPC-C
- TPC-D
- TPC Futures

17 TPC-A Overview
- Transaction is a simple bank account debit/credit
- Database scales with throughput
- Transaction submitted from a terminal

TPC-A transaction:
  Read 100 bytes, including Aid, Tid, Bid, Delta, from terminal (see Clause 1.3)
  BEGIN TRANSACTION
    Update Account where Account_ID = Aid:
      Read Account_Balance from Account
      Set Account_Balance = Account_Balance + Delta
      Write Account_Balance to Account
    Write to History: Aid, Tid, Bid, Delta, Time_stamp
    Update Teller where Teller_ID = Tid:
      Set Teller_Balance = Teller_Balance + Delta
      Write Teller_Balance to Teller
    Update Branch where Branch_ID = Bid:
      Set Branch_Balance = Branch_Balance + Delta
      Write Branch_Balance to Branch
  COMMIT TRANSACTION
  Write 200 bytes, including Aid, Tid, Bid, Delta, Account_Balance, to terminal

18 TPC-A Database Schema
- Branch: B rows
- Teller: B*10 rows
- Account: B*100K rows
- History: B*2.6M rows
- Legend: one-to-many relationships run from Branch to Teller, Account, and History.
- 10 terminals per Branch row, 10-second cycle time per terminal: 1 transaction/second per Branch row.

19 TPC-A Transaction
- Workload is vertically aligned with Branch
  - Makes scaling easy, but not very realistic
- 15% of accounts are non-local
  - Produces cross-database activity
- What's good about TPC-A?
  - Easy to understand
  - Easy to measure
  - Stresses high transaction rate, lots of physical IO
- What's bad about TPC-A?
  - Too simplistic! Lends itself to unrealistic optimizations

20 TPC-A Design Rationale
- Branch & Teller: in cache, hotspot on Branch
- Account: too big to cache, requires disk access
- History: sequential insert, hotspot at end
- 90-day History capacity ensures a reasonable ratio of disk to CPU

21 RTE & SUT
- RTE - Remote Terminal Emulator
  - Emulates real user behavior
  - Submits txns to the SUT, measures response time (RT)
  - Transaction rate includes think time
  - Many, many users (10 x tpsA)
- SUT - System Under Test
  - All components except the terminals
(diagram: terminals connect over a T-C network to clients, clients over a C-S network to the host system(s), servers over an S-S network to each other; the RTE drives the terminals and measures response time at the terminal boundary)

22 TPC-A Metric
- tpsA = transactions per second:
  - average rate over a 15+ minute interval,
  - at which 90% of txns get <= 2 second RT
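
As a toy illustration of the metric, the sketch below (a hypothetical helper, not the spec's procedure; the real measurement rules around steady state and ramp-up are more involved) computes the average rate and the 90th-percentile response time from a list of measured RTs:

```python
def tpsA_report(response_times, interval_s):
    """Average throughput over the interval plus the 90th-percentile response time.

    response_times: per-transaction response times in seconds for txns
    completing in the interval. Uses a simple order-statistic percentile.
    """
    rts = sorted(response_times)
    p90 = rts[max(0, int(0.9 * len(rts)) - 1)]  # 90% of txns finish within this RT
    rate = len(rts) / interval_s                # transactions per second
    return rate, p90, p90 <= 2.0                # qualifies only if p90 <= 2 s

# Hypothetical run: 10 txns in 10 seconds, one slow outlier.
rate, p90, qualifies = tpsA_report([1.0] * 9 + [5.0], interval_s=10.0)
```

Note that the single 5-second outlier does not disqualify the run: only the 90th percentile must stay under 2 seconds.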

23 TPC-A Price
- Price = 5-year cost of ownership: hardware, software, maintenance
- Does not include development, comm lines, operators, power, cooling, etc.
- The strict pricing model is one of the TPC's big contributions:
  - List prices
  - System must be orderable & commercially available
  - Committed ship date

24 Differences Between TPC-A and TPC-B
- TPC-B is the database-only portion of TPC-A
  - No terminals, no think times
- TPC-B reduces History capacity to 30 days
  - Less disk in the priced configuration
- TPC-B was easier to configure and run, BUT
- Even though TPC-B was more popular with vendors, it did not have much credibility with customers.

25 TPC Loopholes
- Pricing
  - Package pricing
  - Price does not include the cost of the five-star wizards needed to get optimal performance, so reported performance is not what a customer could get.
- Client/Server
  - Offload presentation services to cheap clients, but report only the performance of the server
- Benchmark specials
  - Discrete transactions
  - Custom transaction monitors
  - Hand-coded presentation services

26 TPC-A/B Legacy
- First results in 1990: 38.2 tpsA, 29.2 K$/tpsA (HP)
- Last results in 1994: 3700 tpsA, 4.8 K$/tpsA (DEC)
- WOW! 100x on performance & 6x on price in 5 years!!
- The TPC cut its teeth on TPC-A/B and became a functioning, representative body
- Learned a lot of lessons:
  - If a benchmark is not meaningful, it doesn't matter how many numbers there are or how easy it is to run (TPC-B).
  - How to resolve ambiguities in the spec
  - How to police compliance
  - Rules of engagement

27 TPC-A Established the OLTP Playing Field
- TPC-A was criticized for being irrelevant, unrepresentative, misleading.
- But the truth is that TPC-A drove performance, drove price/performance, and forced everyone to clean up their products to be competitive.
- The trend forced the industry toward one price/performance point, regardless of size.
- For some, it became the means to achieve legitimacy in OLTP.

28 Outline
- Introduction
- History of TPC
- TPC-A and TPC-B
- TPC-C
- TPC-D
- TPC Futures

29 TPC-C Overview
- Moderately complex OLTP
- The result of 2+ years of development by the TPC
- Application models a wholesale supplier managing orders.
- Order-entry provides a conceptual model for the benchmark; the underlying components are typical of any OLTP system.
- Workload consists of five transaction types.
- Users and database scale linearly with throughput.
- Spec defines a full-screen end-user interface.
- Metrics are new-order txn rate (tpmC) and price/performance ($/tpmC).
- Specification was approved July 23, 1992.

30 TPC-C's Five Transactions
- OLTP transactions:
  - New-order: enter a new order from a customer
  - Payment: update customer balance to reflect a payment
  - Delivery: deliver orders (done as a batch transaction)
  - Order-status: retrieve status of a customer's most recent order
  - Stock-level: monitor warehouse inventory
- Transactions operate against a database of nine tables.
- Transactions do update, insert, delete, and abort; primary and secondary key access.
- Response time requirement: 90% of each type of transaction must have a response time <= 5 seconds, except the (queued mini-batch) stock-level, which is 20 seconds.

31 TPC-C Database Schema
- Warehouse: W rows
- District: W*10 (10 per warehouse)
- Customer: W*30K (3K per district)
- History: W*30K+ (1+ per customer)
- Item: 100K (fixed)
- Stock: W*100K (100K per warehouse)
- Order: W*30K+ (1+ per customer)
- Order-Line: W*300K
- New-Order: W*5K (0-1 per order)
- Legend: tables are connected by one-to-many relationships from Warehouse down; some tables carry secondary indexes.

32 TPC-C Workflow
1. Select a txn from the menu (measure menu response time):
   - New-Order 45%
   - Payment 43%
   - Order-Status 4%
   - Delivery 4%
   - Stock-Level 4%
2. Input screen (keying time), then output screen (measure txn response time), then think time.
3. Go back to 1.
Cycle time decomposition (typical values, in seconds, for the weighted-average txn):
   Menu = 0.3, Keying = 9.6, Txn RT = 2.1, Think = 11.4
   Average cycle time = 23.4
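
These cycle-time numbers also explain the ~1.2 tpmC-per-terminal rule of thumb on slide 38. A small sketch, assuming every transaction type sees the same weighted-average cycle time:

```python
# Typical cycle-time components from the slide, in seconds.
menu, keying, txn_rt, think = 0.3, 9.6, 2.1, 11.4
cycle = menu + keying + txn_rt + think        # 23.4 s per screen cycle

cycles_per_min = 60 / cycle                   # ~2.56 cycles/minute/terminal
new_order_share = 0.45                        # 45% of cycles are New-Order
tpmC_per_terminal = cycles_per_min * new_order_share  # ~1.15 new-orders/min
```

At ~1.15 new-order transactions per minute per terminal, the 1.2 tpmC/terminal ceiling follows directly from the pacing of the emulated user.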

33 Data Skew
- NURand - Non-Uniform Random
- NURand(A, x, y) = (((random(0,A) | random(x,y)) + C) % (y - x + 1)) + x
- Customer last name: NURand(255, 0, 999)
- Customer ID: NURand(1023, 1, 3000)
- Item ID: NURand(8191, 1, 100000)
- The bitwise OR of two random values skews the distribution toward values with more bits on:
  - 75% chance that a given bit is one (1 - 1/2 * 1/2)
- The data skew repeats with period A (the first parameter of NURand)
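
The NURand formula transcribes directly into code. A sketch in Python; note that C is a run-time constant chosen per field, and the value 123 below is an arbitrary stand-in, not a spec value:

```python
import random

C = 123  # per-field run-time constant; 123 is an arbitrary stand-in

def nurand(a, x, y):
    """Non-uniform random value in [x, y].

    The bitwise OR of two uniform draws biases the result toward values
    with more bits set; the distribution repeats with period a.
    """
    return (((random.randint(0, a) | random.randint(x, y)) + C) % (y - x + 1)) + x

# Sanity check with the customer-ID parameters: results always land in [1, 3000].
vals = [nurand(1023, 1, 3000) for _ in range(10_000)]
in_range = all(1 <= v <= 3000 for v in vals)
```

Plotting a histogram of `vals` reproduces the skewed shape shown on the next slide: hot values recur with period A.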

34 NURand Distribution

35 ACID Tests
- TPC-C requires that transactions be ACID.
- Tests are included to demonstrate that the ACID properties are met.
- Atomicity: verify that all changes within a transaction commit or abort together.
- Consistency
- Isolation:
  - ANSI repeatable reads for all but the Stock-Level transaction.
  - Committed reads for Stock-Level.
- Durability: must demonstrate recovery from
  - loss of power,
  - loss of memory,
  - loss of media (e.g., disk crash).

36 Transparency
- TPC-C requires that all data partitioning be fully transparent to the application code. (See TPC-C Clause 1.6)
- Both horizontal and vertical partitioning are allowed.
- All partitioning must be hidden from the application.
- Most DBs do single-node horizontal partitioning.
- Much harder: multiple-node transparency.
- For example, in a two-node cluster with warehouses split across nodes, any DML operation must be able to operate against the entire database, regardless of physical location:
    select * from warehouse where W_ID = 150   (row lives on Node A)
    select * from warehouse where W_ID = 77    (row lives on Node B)

37 Transparency (cont.)
- How does transparency affect TPC-C?
- Payment txn: 15% of Customer table records are non-local to the home warehouse.
- New-order txn: 1% of Stock table records are non-local to the home warehouse.
- In a cluster, cross-warehouse traffic means cross-node traffic: 2-phase commit, distributed lock management, or both.
- For example, with distributed txns, the share of network transactions grows with the number of nodes n, approaching ~10.9% for large n.
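
The ~10.9% figure can be reproduced from the percentages above. A sketch, under two assumptions the slide does not state: an average of 10 order lines per New-Order transaction, and a simple linear count of remote stock accesses in the many-node limit:

```python
# Transaction mix weights from the workflow slide.
payment_share, new_order_share = 0.43, 0.45
remote_customer = 0.15     # Payment: 15% of customers are non-local
remote_stock_line = 0.01   # New-order: 1% of stock records are non-local
lines_per_order = 10       # assumed average order lines per New-Order

# In the many-node limit essentially every remote access crosses the network,
# so weight each txn type's remote-access rate by its share of the mix.
network_fraction = (payment_share * remote_customer
                    + new_order_share * remote_stock_line * lines_per_order)
# network_fraction is about 0.109: ~10.9% of txns touch the network
```

Only the remaining ~89% of transactions stay entirely on one node, which is why distributed commit and lock-management costs show up so quickly in clustered TPC-C results.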

38 TPC-C Rules of Thumb
- 1.2 tpmC per user/terminal (maximum)
- 10 terminals per warehouse (fixed)
- ~65 MB/tpmC priced disk capacity (minimum)
- ~0.5 physical IOs/sec/tpmC (typical)
- 3-7 KB main memory/tpmC
- So use the rules of thumb to size a 10,000 tpmC system:
  - How many terminals?
  - How many warehouses?
  - How much memory?
  - How much disk capacity?
  - How many spindles?

39 Typical TPC-C Configuration (Conceptual)
- Driver system: hardware RTE (e.g., Empower, preVue, LoadRunner) emulating the user load; response time is measured here.
- Clients: presentation services; TPC-C application + txn monitor and/or database RPC library (e.g., Tuxedo, ODBC).
- Database server: database functions; TPC-C application (stored procedures) + database engine + txn monitor (e.g., SQL Server, Tuxedo).
- The driver reaches the clients over a terminal LAN; the clients reach the server over a C/S LAN.

40 Competitive TPC-C Configuration Today
- 7,128 tpmC; $89/tpmC; 5-yr COO = 569 K$
- 2 GB memory, 85 x 9-GB disks (733 GB total)
- 6500 users

41 Demo of SQL Server + Web Interface
- User interface implemented with a Web browser via HTML
- Client to server via ODBC
- SQL Server database engine
- All in one nifty little box!

42 TPC-C Current Results
- Best performance is 30,390 tpmC at $305/tpmC (Oracle/DEC)
- Best price/performance is 6,712 tpmC at $65/tpmC (MS SQL/DEC/Intel)
- The graphs show:
  - the high price of UNIX
  - the diseconomy of UNIX scaleup

43 Compare SMP Performance

44 TPC-C Summary
- Balanced, representative OLTP mix: five transaction types
- Database-intensive; substantial IO and cache load
- Scaleable workload
- Complex data: data attributes, size, skew
- Requires transparency and ACID
- Full-screen presentation services
- De facto standard for OLTP performance

45 Outline
- Introduction
- History of TPC
- TPC-A and TPC-B
- TPC-C
- TPC-D
- TPC Futures

46 TPC-D Overview
- Complex decision-support workload
- The result of 5 years of development by the TPC
- Benchmark models ad hoc queries: an extract database with concurrent updates, in a multi-user environment
- Workload consists of 17 queries and 2 update streams
- SQL as written in the spec
- Database load time must be reported
- Database is quantized into fixed sizes
- Metrics are Power (QppD), Throughput (QthD), and Price/Performance ($/QphD)
- Specification was approved April 5, 1995.

47 TPC-D Schema
- Customer: SF*150K
- LineItem: SF*6000K
- Order: SF*1500K
- Supplier: SF*10K
- Nation: 25
- Region: 5
- PartSupp: SF*800K
- Part: SF*200K
- Time: 2557
- Legend: arrows point in the direction of one-to-many relationships; the value after each table name is its cardinality; SF is the Scale Factor.
- The Time table is optional. So far, it has not been used by anyone.

48 TPC-D Database Scaling and Load
- Database size is determined from fixed Scale Factors (SF):
  - 1, 10, 30, 100, 300, 1000, 3000 (note that 3 is missing; not a typo)
  - These correspond to the nominal database size in GB. (I.e., SF 10 is approx. 10 GB, not including indexes and temp tables.)
  - Indexes and temporary tables can significantly increase the total disk capacity. (3-5x is typical)
- Database is generated by DBGEN
  - DBGEN is a C program which is part of the TPC-D spec.
  - Use of DBGEN is strongly recommended; the TPC-D database contents must be exact.
- Database load time must be reported
  - Includes time to create indexes and update statistics.
  - Not included in the primary metrics.

49 TPC-D Query Set
- 17 queries, written in SQL-92, implement the business questions.
- Queries are pseudo ad hoc:
  - QGEN replaces each substitution parameter with a random constant
  - No host variables
  - No static SQL
- Queries cannot be modified -- SQL as written
  - There are some minor exceptions.
  - All variants must be approved in advance by the TPC.

50 TPC-D Update Streams
- Update 0.1% of the data per query stream
  - About as long as a medium-sized TPC-D query
- Implementation of the updates is left to the sponsor, except that ACID properties must be maintained
- Update Function 1 (UF1): insert new rows into the ORDER and LINEITEM tables, equal to 0.1% of table size
- Update Function 2 (UF2): delete rows from the ORDER and LINEITEM tables, equal to 0.1% of table size

51 TPC-D Execution
- Power Test
  - Queries submitted in a single stream (i.e., no concurrency)
  - Sequence: cache flush; query set 0 (warm-up, untimed, optional); then the timed sequence: UF1, query set, UF2
- Throughput Test
  - Multiple concurrent query streams plus a single update stream
  - Sequence: query sets 1 through N run concurrently while the update stream executes UF1/UF2 pairs (UF1 UF2, UF1 UF2, ...)

52 TPC-D Metrics
- Power Metric (QppD): based on the geometric mean of the timed intervals; queries per hour, times SF
- Throughput (QthD): based on the linear (arithmetic) rate; queries per hour, times SF

53 TPC-D Metrics (cont.)
- Composite Query-Per-Hour Rating (QphD)
  - The Power and Throughput metrics are combined (geometric mean) to get the composite queries per hour.
- Reported metrics are:
  - Power: QppD@Size
  - Throughput: QthD@Size
  - Price/Performance: $/QphD@Size
- Comparability:
  - Results within a size category (SF) are comparable.
  - Comparisons among different size databases are strongly discouraged.
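
A sketch of how the three metrics might be computed, assuming the commonly cited TPC-D formulas (3600·SF over the geometric mean of the 17 query times and 2 update times for power; streams × 17 queries × 3600 / elapsed seconds × SF for throughput); the exact definitions live in the spec, which the slide does not reproduce:

```python
from math import prod, sqrt

def qppd(query_secs, update_secs, sf):
    """Power: 3600*SF over the geometric mean of the 19 timed intervals (assumed formula)."""
    times = list(query_secs) + list(update_secs)   # 17 queries + 2 update functions
    geo_mean = prod(times) ** (1 / len(times))
    return 3600 * sf / geo_mean

def qthd(n_streams, elapsed_secs, sf):
    """Throughput: total queries per hour across all streams, times SF (assumed formula)."""
    return n_streams * 17 * 3600 / elapsed_secs * sf

def qphd(power, throughput):
    """Composite rating: geometric mean of Power and Throughput."""
    return sqrt(power * throughput)

# Hypothetical SF=1 run: every query and update takes exactly one minute,
# and one throughput stream finishes its 17 queries in an hour.
power = qppd([60.0] * 17, [60.0, 60.0], sf=1)   # 3600/60 = 60 QppD
throughput = qthd(1, 3600.0, sf=1)              # 17 QthD
```

The geometric mean in the Power metric rewards systems that are uniformly fast: one pathologically slow query drags QppD down far more than it would an arithmetic average.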

54 TPC-D Current Results

55 Example TPC-D Results

56 Want to Learn More About TPC-D?
- TPC-D Training Video
  - Six-hour video by the folks who wrote the spec.
  - Explains, in detail, all major aspects of the benchmark.
  - Available from the TPC: Shanley Public Relations, N. First St., Suite 600, San Jose, CA

57 Outline
- Introduction
- History of TPC
- TPC-A and TPC-B
- TPC-C
- TPC-D
- TPC Futures

58 TPC Future Direction
- TPC-Web: the TPC is just starting a Web benchmark effort.
- The TPC's focus will be on database and transaction characteristics.
- The interesting components: browsers, over TCP/IP, to a Web server (Web server application, file system), to a DBMS server (SQL engine, stored procs, database).

59 Rules of Thumb
Answer set for the TPC-C rules of thumb (slide 38): a 10 ktpmC system needs
- ~8,333 terminals (= 10,000 / 1.2)
- ~833 warehouses (= 8,333 / 10)
- 3 GB - 7 GB of DRAM (= 10,000 * [3 KB .. 7 KB])
- 650 GB of disk space (= 10,000 * 65 MB)
- Number of spindles depends on MB capacity vs. physical IO:
  - Capacity: 650 GB / 4 GB = ~162 spindles
  - IO: 10,000 * 0.5 / 162 = ~31 IO/sec per spindle (OK!)
  - But 9 GB or 23 GB disks would be TOO HOT!
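
The answer set follows mechanically from the slide-38 rules of thumb; a small calculator sketch:

```python
def size_tpcc(tpmC):
    """Back-of-envelope TPC-C sizing from the rules of thumb on slide 38."""
    terminals  = tpmC / 1.2                  # 1.2 tpmC per terminal, maximum
    warehouses = terminals / 10              # 10 terminals per warehouse, fixed
    dram_gb    = (tpmC * 3e3 / 1e9,          # 3-7 KB of memory per tpmC
                  tpmC * 7e3 / 1e9)
    disk_gb    = tpmC * 65e6 / 1e9           # ~65 MB of priced disk per tpmC
    spindles   = disk_gb / 4                 # 4 GB drives, capacity-limited
    io_per_spindle = tpmC * 0.5 / spindles   # ~0.5 physical IO/s per tpmC
    return terminals, warehouses, dram_gb, disk_gb, spindles, io_per_spindle

terms, whs, dram, disk, spin, io = size_tpcc(10_000)
```

Swapping 4 GB drives for 9 GB ones cuts the spindle count roughly in half and pushes per-spindle IO toward 70 IO/sec, which is why the slide flags larger disks as too hot.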

60 Reference Material
- Jim Gray, The Benchmark Handbook for Database and Transaction Processing Systems, Morgan Kaufmann, San Mateo, CA.
- Raj Jain, The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling, John Wiley & Sons, New York.
- William Highleyman, Performance Analysis of Transaction Processing Systems, Prentice Hall, Englewood Cliffs, NJ.
- TPC web site
- Microsoft db site
- IDEAS web site
