1 TPC Benchmarks. Charles Levine, Microsoft. Modified by Jim Gray, Microsoft, March 1997.
2 Outline: Introduction; History of TPC; TPC-A and TPC-B; TPC-C; TPC-D; TPC Futures
3 Benchmarks: What and Why
- What is a benchmark? Domain specific; no single metric is possible.
- The more general the benchmark, the less useful it is for anything in particular.
- A benchmark is a distillation of the essential attributes of a workload.
4 Benchmarks: What and Why. Desirable attributes:
- Relevant: meaningful within the target domain
- Understandable
- Good metric(s): linear, orthogonal, monotonic
- Scaleable: applicable to a broad spectrum of hardware/architectures
- Coverage: does not oversimplify the typical environment
- Acceptance: vendors and users embrace it
- Portable: not limited to one hardware/software vendor/technology
5 Benefits and Liabilities
- Good benchmarks define the playing field and accelerate progress: engineers do a great job once the objective is measurable and repeatable.
- They set the performance agenda: measure release-to-release progress; set goals (e.g., 10,000 tpmC, < 100 $/tpmC); something managers can understand (!)
- Benchmark abuse: benchmarketing; benchmark wars (more $ on ads than on development).
6 Benchmarks have a Lifetime. Good benchmarks drive industry and technology forward. At some point, all reasonable advances have been made, and benchmarks can become counterproductive by encouraging artificial optimizations. So even good benchmarks become obsolete over time.
7 Outline: Introduction; History of TPC; TPC-A and TPC-B; TPC-C; TPC-D; TPC Futures
8 What is the TPC? TPC = Transaction Processing Performance Council.
- Founded in Aug/88 by Omri Serlin and 8 vendors; membership has held steady for the last several years. Everybody who's anybody in software & hardware.
- De facto industry standards body for OLTP performance.
- Administered by Shanley Public Relations, N. First St., Suite 600, San Jose, CA; ph/fax: (408).
- Most TPC specs, info, and results are on the TPC web page; an unofficial TPC results database also exists.
- News: Omri Serlin's FT Systems News (monthly magazine).
9 Two Seminal Events Leading to TPC
- Anon et al., "A Measure of Transaction Processing Power", Datamation, April 1, 1985. Anon et al. = Jim Gray (Dr. E. A. Anon) and 24 of his closest friends. Its tests: Sort (1M 100-byte records), Mini-batch (copy 1000 records), DebitCredit (simple ATM-style transaction).
- Tandem TopGun Benchmark: DebitCredit at 212 tps on NonStop SQL in 1987 (!). Audited by Tom Sawyer of Codd and Date (a first). Full disclosure of all aspects of the tests (a first). Started the ET1/TP1 benchmark wars of '87-'89.
10 1987: 256 tps Benchmark
- 14 M$ computer (Tandem): a 32-node processor array, a 40 GB disk array (80 drives), false floor, 2 rooms of machines; simulated 25,600 clients.
- A dozen people: admin expert, hardware experts, auditor, network expert, manager, performance expert, OS expert, DB expert.
11 1988: DB2 + CICS Mainframe, 65 tps
- IBM 4391, a 2 M$ refrigerator-sized CPU; 2 x 3725 network controllers; 16 GB disk farm (4 x 8 x 0.5 GB); simulated network of 800 clients.
- Staff of 6 to do the benchmark.
12 1997: 10 years later. 1 person and 1 box = 1250 tps
- 1 breadbox, ~5x the 1987 machine room; 23 GB is hand-held.
- One person does all the work (hardware, OS, net, DB, and app expert in one).
- Cost/tps is 1,000x less: 25 microdollars per transaction.
- 4 x 200 MHz CPUs, 1/2 GB DRAM, 12 x 4 GB disks; 3 x 7 x 4 GB disk arrays.
13 What Happened?
- Moore's law: things get 4x better every 3 years (applies to computers, storage, and networks).
- New economics: commodity-class price/MIPS. (The slide's table of $/MIPS and software k$/year for mainframe, minicomputer, and microcomputer, and its price-vs-time chart, are elided.)
- GUI: the human/computer tradeoff now optimizes for people, not computers.
14 TPC Milestones
- 1989: TPC-A ~ industry standard for DebitCredit
- 1990: TPC-B ~ database-only version of TPC-A
- 1992: TPC-C ~ more representative, balanced OLTP
- 1994: TPC requires all results to be audited
- 1995: TPC-D ~ complex decision support (query)
- 1995: TPC-A/B declared obsolete by TPC
- Non-starters: TPC-E ~ "Enterprise" for the mainframers; TPC-S ~ "Server" component of TPC-C. Both failed during final approval in 1996.
15 TPC vs. SPEC
- SPEC (System Performance Evaluation Cooperative): SPECmarks; ships code; Unix-centric; CPU-centric.
- TPC: ships specifications; ecumenical; database/system/TP-centric; price/performance.
- The TPC and SPEC happily coexist; there is plenty of room for both.
16 Outline: Introduction; History of TPC; TPC-A and TPC-B; TPC-C; TPC-D; TPC Futures
17 TPC-A Overview. Transaction is a simple bank account debit/credit; the database scales with throughput; transactions are submitted from terminals. The TPC-A transaction:
  Read 100 bytes including Aid, Tid, Bid, Delta from terminal (see Clause 1.3)
  BEGIN TRANSACTION
    Update Account where Account_ID = Aid:
      Read Account_Balance from Account
      Set Account_Balance = Account_Balance + Delta
      Write Account_Balance to Account
    Write to History: Aid, Tid, Bid, Delta, Time_stamp
    Update Teller where Teller_ID = Tid:
      Set Teller_Balance = Teller_Balance + Delta
      Write Teller_Balance to Teller
    Update Branch where Branch_ID = Bid:
      Set Branch_Balance = Branch_Balance + Delta
      Write Branch_Balance to Branch
  COMMIT TRANSACTION
  Write 200 bytes including Aid, Tid, Bid, Delta, Account_Balance to terminal
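The transaction profile above can be sketched in a few lines. This is a hypothetical in-memory model (plain dicts standing in for the four tables, no SQL, no locking); a real TPC-A implementation runs these updates as one atomic SQL transaction:

```python
import time

def debit_credit(db, aid, tid, bid, delta):
    """One TPC-A style DebitCredit against in-memory 'tables' (a sketch;
    a real system would wrap this in BEGIN/COMMIT TRANSACTION)."""
    db["account"][aid] += delta                                 # update Account balance
    db["history"].append((aid, tid, bid, delta, time.time()))   # insert History row
    db["teller"][tid] += delta                                  # update Teller balance
    db["branch"][bid] += delta                                  # update Branch balance
    return db["account"][aid]                                   # balance echoed to terminal

# One branch, ten tellers, a scaled-down account table (100 rows, not 100K).
db = {"branch": {1: 0},
      "teller": {t: 0 for t in range(1, 11)},
      "account": {a: 0 for a in range(1, 101)},
      "history": []}
bal = debit_credit(db, aid=7, tid=3, bid=1, delta=50)
```

Note how the same delta flows into Account, Teller, and Branch: that is what makes the Branch row a hotspot while the Account table stays too big to cache.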
18 TPC-A Database Schema
- Cardinalities (one-to-many from Branch down): Branch: B; Teller: B*10; Account: B*100K; History: B*2.6M.
- 10 terminals per Branch row; 10-second cycle time per terminal; 1 transaction/second per Branch row.
19 TPC-A Transaction. The workload is vertically aligned with Branch, which makes scaling easy but is not very realistic; 15% of accounts are non-local, producing cross-database activity.
- What's good about TPC-A? Easy to understand, easy to measure; stresses a high transaction rate and lots of physical IO.
- What's bad about TPC-A? Too simplistic! Lends itself to unrealistic optimizations.
20 TPC-A Design Rationale
- Branch & Teller: fit in cache; hotspot on Branch.
- Account: too big to cache ⇒ requires disk access.
- History: sequential insert; hotspot at the end.
- 90-day History capacity ensures a reasonable ratio of disk to CPU.
21 RTE ⇔ SUT
- RTE (Remote Terminal Emulator): emulates real user behavior; submits txns to the SUT and measures response time; the transaction rate includes think time; many, many users (10 x tpsA).
- SUT (System Under Test): all components except the terminals.
- (Diagram: RTE-emulated terminals connect through the network to the client and host/server systems making up the SUT; response time is measured at the RTE.)
22 TPC-A Metric. tpsA = transactions per second: the average rate, over a 15+ minute interval, at which 90% of txns get ≤ 2 second response time.
23 TPC-A Price
- Price = 5-year cost of ownership: hardware, software, maintenance. It does not include development, comm lines, operators, power, cooling, etc.
- Strict pricing model ⇒ one of TPC's big contributions: list prices; system must be orderable & commercially available; committed ship date.
24 Differences between TPC-A and TPC-B
- TPC-B is the database-only portion of TPC-A: no terminals, no think times.
- TPC-B reduces History capacity to 30 days ⇒ less disk in the priced configuration.
- TPC-B was easier to configure and run, BUT even though it was more popular with vendors, it did not have much credibility with customers.
25 TPC Loopholes
- Pricing: package pricing; the price does not include the cost of the five-star wizards needed to get optimal performance, so reported performance is not what a customer could get.
- Client/Server: offload presentation services to cheap clients, but report the performance of the server.
- Benchmark specials: discrete transactions, custom transaction monitors, hand-coded presentation services.
26 TPC-A/B Legacy
- First results in 1990: 38.2 tpsA, 29.2 K$/tpsA (HP). Last results in 1994: 3700 tpsA, 4.8 K$/tpsA (DEC). WOW! 100x on performance & 6x on price in 5 years!!
- TPC cut its teeth on TPC-A/B and became a functioning, representative body. Lessons learned: if a benchmark is not meaningful, it doesn't matter how many numbers it produces or how easy it is to run (TPC-B); how to resolve ambiguities in the spec; how to police compliance; rules of engagement.
27 TPC-A Established the OLTP Playing Field. TPC-A was criticized for being irrelevant, unrepresentative, and misleading. But the truth is that TPC-A drove performance, drove price/performance, and forced everyone to clean up their products to be competitive. The trend forced the industry toward one price/performance level, regardless of size. For some, it became a means to achieve legitimacy in OLTP.
28 Outline: Introduction; History of TPC; TPC-A and TPC-B; TPC-C; TPC-D; TPC Futures
29 TPC-C Overview. Moderately complex OLTP; the result of 2+ years of development by the TPC.
- The application models a wholesale supplier managing orders. Order-entry provides a conceptual model for the benchmark; the underlying components are typical of any OLTP system.
- The workload consists of five transaction types. Users and database scale linearly with throughput. The spec defines a full-screen end-user interface.
- Metrics are the new-order txn rate (tpmC) and price/performance ($/tpmC).
- The specification was approved July 23, 1992.
30 TPC-C’s Five Transactions OLTP transactions:New-order: enter a new order from a customerPayment: update customer balance to reflect a paymentDelivery: deliver orders (done as a batch transaction)Order-status: retrieve status of customer’s most recent orderStock-level: monitor warehouse inventoryTransactions operate against a database of nine tables.Transactions do update, insert, delete, and abort; primary and secondary key access.Response time requirement:90% of each type of transactionmust have a response time £ 5 seconds,except (queued mini-batch) stock-level which is £ 20 seconds.
31 TPC-C Database Schema. Cardinalities (W = number of warehouses; the per-parent count follows each table):
- Warehouse: W; District: W*10 (10 per warehouse); Customer: W*30K (3K per district); History: W*30K+ (1+ per customer); Order: W*30K+ (1+ per customer); New-Order: W*5K (0-1 per order); Order-Line: W*300K+ (10-15 per order); Stock: W*100K (100K per warehouse); Item: 100K (fixed).
32 TPC-C Workflow
1. Select txn from menu (measure menu response time): New-Order 45%, Payment 43%, Order-Status 4%, Delivery 4%, Stock-Level 4%.
2. Input screen; keying time.
3. Measure txn response time at the output screen; think time; go back to 1.
Cycle time decomposition (typical values, in seconds, for the weighted average txn): Menu = 0.3, Keying = 9.6, Txn RT = 2.1, Think = 11.4. Average cycle time = 23.4.
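The cycle-time decomposition above also explains the 1.2 tpmC-per-terminal ceiling that appears later in the rules of thumb. A quick check of the arithmetic (variable names are mine):

```python
# Typical per-step times from the workflow slide, in seconds.
menu, keying, txn_rt, think = 0.3, 9.6, 2.1, 11.4
cycle = menu + keying + txn_rt + think          # 23.4 s per weighted-average txn

txns_per_min = 60 / cycle                       # ~2.56 txns per terminal per minute
new_order_rate = 0.45 * txns_per_min            # 45% are New-Order -> ~1.15 tpmC/terminal
```

So a terminal pacing through menu, keying, and think times can contribute at most about 1.15 New-Order transactions per minute, just under the 1.2 tpmC/terminal maximum.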
33 Data Skew. NURand = Non-Uniform Random:
- NURand(A, x, y) = (((random(0,A) | random(x,y)) + C) % (y - x + 1)) + x
- Customer last name: NURand(255, 0, 999); customer ID: NURand(1023, 1, 3000); item ID: NURand(8191, 1, 100000).
- The bitwise OR of two random values skews the distribution toward values with more bits on: there is a 75% chance that a given bit is one (1 - 1/2 * 1/2). The data skew repeats with period A (the first parameter of NURand).
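The NURand formula above translates directly into code. A sketch (here C defaults to 0; in the spec C is a run-time constant chosen per field):

```python
import random

def nurand(A, x, y, C=0):
    """TPC-C non-uniform random over [x, y]: OR-ing two uniform draws
    skews the result toward values with more low-order bits set."""
    return (((random.randint(0, A) | random.randint(x, y)) + C) % (y - x + 1)) + x

random.seed(1)
# Draw 10,000 customer IDs with the spec's parameters for that field.
samples = [nurand(1023, 1, 3000) for _ in range(10_000)]
```

The OR can exceed y, which is why the modulo-and-shift at the end is needed to fold the result back into [x, y].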
35 ACID Tests. TPC-C requires transactions to be ACID; tests are included to demonstrate the ACID properties are met.
- Atomicity: verify that all changes within a transaction commit or abort together.
- Consistency.
- Isolation: ANSI repeatable reads for all but the Stock-Level transaction; committed reads for Stock-Level.
- Durability: must demonstrate recovery from loss of power, loss of memory, and loss of media (e.g., disk crash).
36 Transparency. TPC-C requires that all data partitioning be fully transparent to the application code (see TPC-C Clause 1.6).
- Both horizontal and vertical partitioning are allowed, but all partitioning must be hidden from the application.
- Most DBs do single-node horizontal partitioning; multiple-node transparency is much harder.
- For example, in a two-node cluster where Node A holds warehouses 1-100: "select * from warehouse where W_ID = 150" and "select * from warehouse where W_ID = 77" must both work from either node. Any DML operation must be able to operate against the entire database, regardless of physical location.
37 Transparency (cont.) How does transparency affect TPC-C? In a cluster:
- Payment txn: 15% of Customer table records are non-local to the home warehouse.
- New-order txn: 1% of Stock table records are non-local to the home warehouse.
- Cross-warehouse traffic ⇒ cross-node traffic ⇒ two-phase commit, distributed lock management, or both.
For example, with distributed txns:
  Number of nodes   % Network Txns
  1                 0
  2                 5.5
  3                 7.3
  n → ∞             10.9
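The table above can be approximated with a back-of-envelope model: combine the transaction mix with the remote-warehouse probabilities, then note that a remote warehouse lands on another node with probability (n-1)/n. This is my reconstruction, not the spec's exact derivation, so it reproduces the table only approximately:

```python
def network_txn_pct(nodes, payment_remote=0.15, stock_remote=0.01,
                    items_per_order=10, mix_payment=0.43, mix_new_order=0.45):
    """Approximate % of txns that touch another node, given the TPC-C mix.
    The spec's table reads 0 / 5.5 / 7.3 / 10.9 for 1 / 2 / 3 / infinite nodes."""
    # New-Order is distributed if any of its ~10 order lines hits remote stock:
    p_new_order = 1 - (1 - stock_remote) ** items_per_order
    # Probability a txn references some non-home warehouse at all:
    p_remote = mix_payment * payment_remote + mix_new_order * p_new_order
    # A non-home warehouse is on another node with probability (n-1)/n:
    return 100 * p_remote * (nodes - 1) / nodes
```

With these inputs the model gives about 5.4%, 7.2%, and 10.8% for 2, 3, and many nodes, within rounding of the slide's figures.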
38 TPC-C Rules of Thumb
- 10 terminals per warehouse (fixed)
- 1.2 tpmC per user/terminal (maximum)
- 65-70 MB/tpmC priced disk capacity (minimum)
- ~0.5 physical IOs/sec/tpmC (typical)
- ~300-700 KB main memory/tpmC
So use the rules of thumb to size a 10,000 tpmC system: How many terminals? How many warehouses? How much memory? How much disk capacity? How many spindles?
44 TPC-C Summary
- Balanced, representative OLTP mix: five transaction types; database intensive, with substantial IO and cache load; scaleable workload.
- Complex data: data attributes, size, skew.
- Requires transparency and ACID; full-screen presentation services.
- The de facto standard for OLTP performance.
45 Outline: Introduction; History of TPC; TPC-A and TPC-B; TPC-C; TPC-D; TPC Futures
46 TPC-D Overview. Complex decision-support workload; the result of 5 years of development by the TPC.
- The benchmark models ad hoc queries: an extract database with concurrent updates, in a multi-user environment.
- The workload consists of 17 queries and 2 update streams; SQL as written in the spec.
- Database load time must be reported. The database is quantized into fixed sizes.
- Metrics are Power (QppD), Throughput (QthD), and Price/Performance ($/QphD).
- The specification was approved April 5, 1995.
47 TPC-D Schema. Cardinalities (SF = Scale Factor; arrows in the original diagram point in the direction of the one-to-many relationships):
- Customer: SF*150K; Nation: 25; Region: 5; Order: SF*1500K; Supplier: SF*10K; Part: SF*200K; PartSupp: SF*800K; LineItem: SF*6000K; Time: 2557 (optional; so far not used by anyone).
48 TPC-D Database Scaling and Load
- Database size is determined by fixed Scale Factors (SF): 1, 10, 30, 100, 300, 1000 (note that 3 is missing; not a typo). These correspond to the nominal database size in GB (i.e., SF 10 is approx. 10 GB, not including indexes and temp tables). Indices and temporary tables can significantly increase the total disk capacity (3-5x is typical).
- The database is generated by DBGEN, a C program which is part of the TPC-D spec. Use of DBGEN is strongly recommended; TPC-D database contents must be exact.
- Database load time must be reported. It includes the time to create indexes and update statistics, but is not included in the primary metrics.
49 TPC-D Query Set
- 17 queries, written in SQL92, implement business questions.
- Queries are pseudo ad hoc: QGEN replaces the substitution parameters with randomly chosen constants; no host variables; no static SQL.
- Queries cannot be modified: "SQL as written". There are some minor exceptions, and all variants must be approved in advance by the TPC.
50 TPC-D Update Streams
- Updates touch 0.1% of the data per query stream; each takes about as long as a medium-sized TPC-D query.
- Implementation of the updates is left to the sponsor, except that the ACID properties must be maintained.
- Update Function 1 (UF1): insert new rows into the ORDER and LINEITEM tables equal to 0.1% of table size.
- Update Function 2 (UF2): delete rows from the ORDER and LINEITEM tables equal to 0.1% of table size.
51 TPC-D Execution
- Power Test: queries submitted in a single stream (i.e., no concurrency). Sequence: cache flush; an optional untimed warm-up (query set 0); then the timed sequence of the query set plus UF1 and UF2.
- Throughput Test: multiple concurrent query streams (query sets 1..N) plus a single update stream running UF1/UF2 pairs.
52 TPC-D Metrics
- Power (QppD): geometric-mean-based queries per hour, times SF.
- Throughput (QthD): linear (arithmetic) queries per hour, times SF.
53 TPC-D Metrics (cont.)
- Composite query-per-hour rating (QphD): the Power and Throughput metrics are combined as a geometric mean to get the composite queries per hour: QphD = sqrt(QppD * QthD).
- Reported metrics are Power (QppD), Throughput (QthD), and Price/Performance ($/QphD).
- Comparability: results within a size category (SF) are comparable; comparisons among different size databases are strongly discouraged.
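The metric arithmetic can be sketched as follows. This is my reading of the spec's shape (geometric mean over the 17 query and 2 update times for Power, elapsed-time throughput for QthD, geometric mean of the two for the composite); the spec adds interval and rounding details omitted here:

```python
from math import prod, sqrt

def qppd(query_secs, update_secs, sf):
    """Power: SF * 3600 over the geometric mean of the 19 timed steps."""
    times = list(query_secs) + list(update_secs)
    geo_mean = prod(times) ** (1 / len(times))
    return sf * 3600 / geo_mean

def qthd(num_streams, elapsed_secs, sf, queries_per_stream=17):
    """Throughput: total queries completed per hour, scaled by SF."""
    return num_streams * queries_per_stream * 3600 / elapsed_secs * sf

def qphd(power, throughput):
    """Composite rating: geometric mean of Power and Throughput."""
    return sqrt(power * throughput)
```

The geometric mean in QppD is why a single very slow query hurts Power far more than it would hurt an arithmetic average: every query's time contributes multiplicatively.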
56 Want to learn more about TPC-D? The TPC-D Training Video: a six-hour video by the folks who wrote the spec, explaining, in detail, all major aspects of the benchmark. Available from the TPC: Shanley Public Relations, N. First St., Suite 600, San Jose, CA; ph/fax: (408).
57 Outline: Introduction; History of TPC; TPC-A and TPC-B; TPC-C; TPC-D; TPC Futures
58 TPC Future Direction: TPC-Web
- The TPC is just starting a Web benchmark effort; its focus will be on database and transaction characteristics.
- The interesting components (diagram): browser → TCP/IP → Web server (application, file system) → DBMS server (SQL engine, stored procedures, database).
59 Rules of Thumb. Answer set for the TPC-C rules of thumb (slide 38), for a 10 ktpmC system:
- ≈ 834 warehouses (= 10,000 / 1.2 / 10, rounded up)
- ≈ 8,340 terminals (= 834 warehouses x 10)
- 3 GB-7 GB of DRAM (= 10,000 x [300 KB .. 700 KB])
- 650 GB of disk space (= 10,000 x 65 MB)
- Number of spindles depends on MB capacity vs. physical IO. Capacity: 650 GB / 4 GB ≈ 162 spindles. IO: 10,000 x 0.5 = 5,000 IO/s spread over 162 spindles ≈ 31 IO/s each (OK!), but 9 GB or 23 GB disks would be TOO HOT!
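The answer-set arithmetic above is mechanical enough to put in a small calculator. A sketch using the slide's rule-of-thumb constants (function name is mine; it rounds warehouse and spindle counts up, where the slide rounds the spindle count down):

```python
def size_tpcc(tpmc):
    """Back-of-envelope TPC-C sizing from the rules of thumb:
    10 terminals/warehouse at 1.2 tpmC each, 65 MB and 0.5 IO/s per tpmC."""
    warehouses = -(-tpmc // 12)             # ceil: 12 tpmC max per warehouse (10 * 1.2)
    terminals = warehouses * 10             # 10 terminals per warehouse (fixed)
    disk_gb = tpmc * 65 / 1000              # 65 MB/tpmC priced disk capacity
    spindles = -(-disk_gb // 4)             # ceil over 4 GB drives
    ios_per_spindle = tpmc * 0.5 / spindles # must stay under the drive's IO limit
    return terminals, warehouses, disk_gb, spindles, ios_per_spindle

terminals, warehouses, disk_gb, spindles, ios = size_tpcc(10_000)
```

Sizing by capacity rather than IO is the point of the last bullet: 163-odd 4 GB drives leave each spindle at a comfortable ~31 IO/s, whereas packing 650 GB onto far fewer 9 GB or 23 GB drives would overload each one.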
60 Reference Material
- Jim Gray, The Benchmark Handbook for Database and Transaction Processing Systems, Morgan Kaufmann, San Mateo, CA, 1991.
- Raj Jain, The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling, John Wiley & Sons, New York, 1991.
- William Highleyman, Performance Analysis of Transaction Processing Systems, Prentice Hall, Englewood Cliffs, NJ, 1988.
- TPC web site; Microsoft db site; IDEAS web site.