PASS Virtual Chapter presentation March 27, 2014.

Presentation transcript:

PASS Virtual Chapter presentation March 27, 2014

In-Memory Technologies
- In-Memory OLTP: 5-25X performance gain for OLTP, integrated into SQL Server
- In-Memory DW: 5-25X performance gain and high data compression; updatable and clustered
- SSD Buffer Pool Extension: 4-10X of RAM and up to 3X performance gain, transparent to apps
Enhanced High Availability
- AlwaysOn enhancements: increased availability and improved manageability of active secondaries
- Online database operations: increased availability for index/partition maintenance
New Hybrid Scenarios
- Backup to Azure: easy-to-implement, cost-effective disaster recovery to Azure Storage
- HA to Azure VM: easy-to-implement, cost-effective high availability with Windows Azure VMs
- Deploy to Azure: deployment wizard to migrate databases
Better together with Windows Server: WS2012 ReFS support, online resizing of VHDX, Hyper-V Replica, Windows "Blue" support
Extending Power View: enable Power View on existing analytic models and support new multidimensional models
Other investments

In-Memory Technologies
- In-Memory OLTP: 5-25X performance gain for OLTP, integrated into SQL Server
- In-Memory DW: 5-30X performance gain and high data compression; updatable and clustered
- SSD Buffer Pool Extension: 4-10X of RAM and up to 3X performance gain, transparent to apps

- Market need for higher throughput and predictably lower-latency OLTP at a lower cost
- Hardware trends demand architectural changes in the RDBMS
In-Memory OLTP is:
- high performance,
- a memory-optimized OLTP engine,
- integrated into SQL Server, and
- architected for modern hardware trends
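As a rough illustration of what this engine exposes, a minimal sketch of a durable memory-optimized table follows (SQL Server 2014 syntax; the table and column names are hypothetical, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup):

-- Minimal sketch of a durable memory-optimized table with a hash index on the key.
-- Names are illustrative only.
CREATE TABLE dbo.ShoppingCart
(
    CartId      int       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      int       NOT NULL,
    CreatedDate datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);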

- RAM cost continues to decrease
- Moore's Law on total CPU processing power still holds, but through parallel processing (more cores)…
- CPU clock rates have stalled…

Hekaton tech pillars:
- SQL Server Integration: same manageability, administration & development experience; integrated queries & transactions; integrated HA and backup/restore.
- Main-Memory Optimized: direct pointers to rows; indexes exist only in memory; no buffer pool; no write-ahead logging; stream-based storage.
- High Concurrency: multi-version optimistic concurrency control with full ACID support; lock-free data structures; no locks, latches or spinlocks; no I/O during a transaction.
- T-SQL Compiled to Machine Code: T-SQL compiled to machine code leveraging the VC compiler; a procedure and its queries become a C function; aggressive compile-time optimizations (see the sketch after this slide).
Drivers:
- Hardware trends: steadily declining memory prices, NVRAM; many-core processors; stalling CPU clock rates.
- Business: TCO.
Customer benefits: hybrid engine and integrated experience; high-performance data operations; frictionless scale-up; efficient business-logic processing.
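A hedged sketch of a natively compiled stored procedure against the illustrative dbo.ShoppingCart table above (SQL Server 2014 requires SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block with an isolation level and language):

-- Sketch only: a natively compiled procedure is compiled to a .dll at CREATE time.
CREATE PROCEDURE dbo.usp_GetCart
    @CartId int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
    SELECT CartId, UserId, CreatedDate
    FROM dbo.ShoppingCart
    WHERE CartId = @CartId;
END;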

Architecture (inside SQL Server.exe, the Hekaton engine sits beside the existing SQL components):
- Hekaton components: memory-optimized tables & indexes, natively compiled stored procedures and schema (generated .dll), and the Hekaton compiler.
- Existing SQL components: TDS handler and session management, buffer pool, execution plan cache for ad hoc T-SQL and SPs, T-SQL interpreter, access methods, and the parser/catalog/optimizer.
- Query interop lets interpreted T-SQL access memory-optimized tables; non-durable tables live only in memory.
- Storage: a memory-optimized table filegroup alongside the regular data filegroup; the transaction log is shared.
- Natively compiled access is 20-40x more efficient; real apps see 2-30x.
- Reduced log contention, although low latency is still critical for performance.
- Checkpoints are background sequential IO.
- No V1 improvements in the communication layers.
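The memory-optimized table filegroup shown here must exist before tables like the sketch above can be created; a minimal sketch, with database and path names as placeholders:

-- Add a memory-optimized filegroup and a container (directory) to an existing database.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod_dir', FILENAME = 'C:\Data\SalesDB_mod_dir')
    TO FILEGROUP SalesDB_mod;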

- Despite 20 years of optimizing for existing OLTP benchmarks, we still get 2x on a workload derived from TPC-C.
- Apps that take full advantage: e.g. web app session state; apps with periodic bulk updates & heavy random reads.
- Existing apps typically see a 4-7x improvement.

Demo: Table and Stored Procedure Performance Gain

Row format: a row header followed by the payload (the table columns).
Row header layout:
- Begin Ts: 8 bytes
- End Ts: 8 bytes
- StmtId: 4 bytes
- IdxLinkCount: 2 bytes (+ padding)
- Index pointers: 8 bytes * IdxLinkCount

Hash index on Name; each row version carries begin/end timestamps, the columns (Name, City) and index chain pointers.
- Existing versions: (50, ∞ | John | Paris) and (90, ∞ | Susan | Bogota); the hash bucket f(John) points directly at the 'John' row.
- Transaction 99 runs a compiled query, SELECT City WHERE Name = 'John': a simple hash lookup returns a direct pointer to the 'John' row.
- Transaction 100 runs UPDATE City = 'Prague' WHERE Name = 'John': it installs a new version (100, ∞ | John | Prague) and sets the end timestamp of the old version to 100. No locks of any kind, no interference with transaction 99.
- A background operation will unlink and deallocate the old 'John' row version after transaction 99 completes.
Hekaton principle: performance like a cache, functionality like an RDBMS.
Note: HANA still uses 16 KB pages for its row store (optimized for disk IO).

Range (Bw-tree) indexes: a root page, non-leaf pages and leaf pages that point to the data rows, all reached through a Page Mapping Table.
- Page size: up to 8K, sized to the rows.
- Logical pointers: indirect physical pointers through the Page Mapping Table.
- The Page Mapping Table grows (doubles) as the table grows.
- Sibling pages are linked in one direction only.
- Two indexes are required for ASC/DESC ordering (see the sketch below).
- No in-place updates on index pages; changes are handled through delta pages or by building new pages.
- No covering columns (only the key is stored).
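A hedged sketch of declaring both index types inline on a memory-optimized table (names are illustrative; SQL Server 2014 requires all indexes to be declared at table creation):

CREATE TABLE dbo.Orders
(
    OrderId   int       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
    OrderDate datetime2 NOT NULL,
    -- Range (Bw-tree) index; it scans efficiently only in its declared direction,
    -- hence the "two indexes for ASC/DESC" note above.
    INDEX ix_Orders_OrderDate NONCLUSTERED (OrderDate ASC)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);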

The buffer pool and memory-optimized tables share the available server memory: as memory-optimized tables grow, the memory left for the buffer pool shrinks.
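One way to keep memory-optimized tables from squeezing the buffer pool is to cap them with Resource Governor; a sketch assuming SQL Server 2014's sp_xtp_bind_db_resource_pool (pool and database names are hypothetical):

-- Cap the memory available to memory-optimized tables in one database.
CREATE RESOURCE POOL Pool_InMemOLTP WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database that holds the memory-optimized tables to the pool.
EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'SalesDB',
    @pool_name     = N'Pool_InMemOLTP';
-- The binding takes effect the next time the database is brought online.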

Checkpoint file pair for a transaction timestamp range (e.g. 0-100):
- Data file: contains the rows inserted within that transaction range; each entry carries the insert timestamp (TS), RowId, TableId and the row payload.
- Delta file: records the rows from that range that were later deleted (insert TS, RowId, delete TS).

Populating data/delta files - via sequential IO only:
- Transactions insert into the Hekaton table T1 and log to the SQL transaction log (via the log pool); the offline checkpoint thread reads the log and appends rows to the checkpoint files in the memory-optimized table filegroup.
- Data files cover consecutive transaction timestamp ranges; the newest range (500-) receives new inserts.
- A data file has a pre-allocated size (128 MB, or 16 MB on smaller systems).
- The engine switches to a new data file when the current file is full; a transaction does not span data files.
- Once a data file is closed, it becomes read-only.
- Row deletes are tracked in the delta file of the range holding the original insert (in the diagram, deletes of rows with timestamps 150, 250 and 450 land in the corresponding delta files; the height indicates the % of rows deleted).
- Files are append-only.
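The checkpoint file pairs can be inspected from T-SQL; a minimal sketch, assuming the SQL Server 2014 DMV sys.dm_db_xtp_checkpoint_files (column names may differ in later versions):

-- How many data/delta files exist per state in the current database.
SELECT file_type_desc, state_desc, COUNT(*) AS file_count
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc
ORDER BY file_type_desc, state_desc;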

Checkpoint file merge: adjacent data/delta file pairs whose delta files show a high percentage of deleted rows are merged into a single new pair covering the combined timestamp range; once the merge completes, the source files ("files under merge") become deleted files. The diagram contrasts the files in the memory-optimized data filegroup as of time 500 with the files as of time 600.

Recovery: for each checkpoint file pair, the delta file is loaded into an in-memory delta map, and a recovery data loader streams the corresponding data file (in parallel across the memory-optimized containers), filtering out rows found in the delta map, to repopulate the memory-optimized tables.

What fits In-Memory OLTP V1:
- Business logic: optimal - in the database (as stored procedures); not optimal - in the mid-tier.
- Latency and contention locations: optimal - concentrated on a subset of tables/SPs; not optimal - spread across the database.
- Client-server communication: optimal - less frequent; not optimal - chatty.
- T-SQL surface area: optimal - basic; not optimal - complex.
- Log IO: optimal - not a limiting factor; not optimal - a limiting factor.
- Data size: optimal - hot data fits in memory; not optimal - unbounded hot data size.

Overview of In-Memory DW

Columnstore internals: instead of storing data as rows, data is stored as columns (C1, C2, C3, C4, C5, …).
Benefits:
- Improved compression: data from the same domain compresses better.
- Reduced I/O: fetch only the columns needed.
- Improved performance: more data fits in memory.
- Data can exceed memory size (not so for HANA).

Columnstore Index Example: a fact table with the columns OrderDateKey, ProductKey, StoreKey, RegionKey, Quantity and SalesAmount.

Horizontally partition (create row groups): the rows are split into row groups of roughly 1M rows each.

Vertically partition (create segments): within each row group, every column becomes its own segment.

Compress each segment: each column segment is compressed independently.

Read the data: column elimination (only the columns the query references are read) and segment elimination (row groups whose segment metadata cannot match the predicate are skipped).
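For example, a query like the following (against a hypothetical dbo.FactSales built from the columns above) would read only the OrderDateKey and SalesAmount segments, and would skip any row group whose OrderDateKey min/max metadata falls outside the predicate:

-- Column elimination: only OrderDateKey and SalesAmount are touched.
-- Segment elimination: row groups outside the date range are skipped.
SELECT SUM(SalesAmount) AS MonthlySales
FROM dbo.FactSales
WHERE OrderDateKey BETWEEN 20140101 AND 20140131;  -- assumes an integer yyyymmdd key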

The table consists of a column store and a row store (delta store); DML operations (insert, update, delete) leverage the delta store:
- INSERT: values always land in the delta store.
- DELETE: a logical operation; data is physically removed only after a REBUILD is performed.
- UPDATE: a DELETE followed by an INSERT.
- BULK INSERT: if the batch is under ~100K rows, the inserts go into the delta store; otherwise they go directly into the columnstore.
- SELECT: unifies data from the column store and the delta (row) store via an internal UNION operation.
The "tuple mover" converts data into columnar format once a segment is full (~1M rows); the REORGANIZE statement forces the tuple mover to start (see the sketch below).
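A minimal sketch of creating the clustered columnstore index and nudging the tuple mover (SQL Server 2014 syntax; the table name matches the illustrative example above):

CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- Ask the tuple mover to compress closed delta-store row groups now.
ALTER INDEX cci_FactSales ON dbo.FactSales REORGANIZE;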

Demo: CCI (Clustered Columnstore Index)

Overview of Buffer Pool Extension

Goal: use nonvolatile storage devices to increase the amount of memory available to buffer pool consumers. This lets an SSD serve as an intermediate cache for buffer pool pages, which is cheaper than adding the equivalent amount of RAM.

- Uses cheaper SSD storage to reduce SQL Server memory pressure.
- No risk of data loss - only clean pages are involved.
- Up to 3x performance improvement observed for OLTP workloads during internal testing.
- Available in SQL Server Enterprise and Standard editions (the Standard memory limit doubled in 2014).
- Sweet spot: high-throughput, high-endurance SSD storage (ideally PCI-E), sized at 4x-10x of RAM.
- Not optimized for DW workloads (a DW data set is so large that table scans would wash out all cached pages) and not applicable to In-Memory OLTP.

Syntax:
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION
    { ON ( FILENAME = 'os_file_path_and_name', SIZE = <size> [ KB | MB | GB ] ) | OFF }
Only clean pages are evicted from the buffer pool to the SSD extension file; dirty pages are not.
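A concrete, purely illustrative configuration of the syntax above; the file path and size are placeholders, following the 4x-10x-of-RAM guidance from the previous slide:

-- Enable a buffer pool extension file on a fast SSD volume (placeholder path/size).
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'F:\SSDCACHE\SqlServerCache.BPE', SIZE = 64 GB);

-- To disable it again:
-- ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;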

Customer / Workload / Results:
- Edgenet (SaaS provider for retailers and product delivery to end consumers). Workload: availability project providing real-time insight into product price/availability for retailers and end consumers; used by retailers in stores and by end consumers via search engines. Results: 8x-11x performance gains for data ingestion; consolidated from multi-tenant, multi-server to a single database/server; removed the application caching layer (additional latency) from the client tier.
- BWin.party (largest regulated online gaming site). Workload: ASP.NET session state using SQL Server as the repository; critical to end-user performance and interaction with the site. Results: went from 15,000 batch requests/sec per SQL Server to over 250,000; lab testing achieved over 450,000 batch requests/sec; consolidated 18 SQL Server instances to 1.
- SBI Liquidity Market (Japanese foreign currency exchange trading platform with high-volume, low-latency trading). Workload: expecting a 10x volume increase; the system had latency of up to 4 seconds at scale; the goal was improved throughput and under 1-second latency. Results: redesigned the application for In-Memory OLTP; 2x throughput (over 3x in testing) and latency reduced from 4 seconds to 1 second per business transaction.