SQL Server 2016 In-Memory OLTP for the DBA


1 SQL Server 2016 In-Memory OLTP for the DBA

2 Justin Randall, Senior Consultant, SentryOne
Blog:
Over 30 years of experience as a data professional, including database administration, data analysis, database design, enterprise data architecture administration, strategic information planning, and the development and implementation of IS architectures and methodologies. 19 years of experience with SQL Server. 6 years at SentryOne as a Professional Services consultant providing product customizations, training, mentoring, and other customer success activities.

3 Agenda
Memory-Optimized Tables Basics
Managing Memory
Managing Storage
Backup, Restore, and Recovery
The DBA's Checklist

In-Memory OLTP offers the potential for significantly faster transaction processing by introducing a specialized memory-optimized relational data management engine and a native stored procedure compiler. Before attempting to implement In-Memory OLTP, DBAs and developers need a thorough understanding of this new technology: how to plan an implementation, its effect on server resources, how to monitor resource utilization and performance, how to recover from situations peculiar to the technology (such as out-of-memory scenarios), and the impact on backup, restore, and recovery processes. The technology is designed for certain use cases that designers and implementers should become familiar with; read the white paper In-Memory OLTP – Common Workload Patterns and Migration Considerations. In-Memory OLTP is a big topic that cannot be fully addressed in a one-hour session; today we are going to look at a few essential aspects of this technology.

4 In-Memory OLTP Requirements
SQL Server 2016 SP1 (or later), any edition. (For SQL Server 2014 and SQL Server 2016 RTM, pre-SP1, you need Enterprise, Developer, or Evaluation edition.)
Enough memory to hold the data in memory-optimized tables and indexes, as well as additional memory to support the online workload.
When running SQL Server in a VM, ensure enough memory is allocated to the VM to support the memory needed for memory-optimized tables and indexes. Depending on the hypervisor, the configuration option that guarantees memory allocation for the VM may be called Memory Reservation or, when using Dynamic Memory, Minimum RAM; make sure these settings are sufficient for the needs of the databases in SQL Server.
Free disk space of at least 2x the size of your durable memory-optimized tables.
Processors must support the cmpxchg16b instruction; all modern 64-bit processors do.
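A quick sanity check before planning, as a minimal sketch; these are standard SERVERPROPERTY arguments:

-- 1 = the instance supports In-Memory OLTP (XTP)
SELECT SERVERPROPERTY('IsXTPSupported') AS IsXTPSupported,
       SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;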

5 Memory-Optimized Tables
Definition: Row-based, latch- and lock-free structures in the SQL Server engine, designed for transactional system performance gains. There are two types of memory-optimized tables: durable (persisted on disk across restarts) and non-durable (volatile, like a global temp table).

Typical target workload: transaction processing in an OLTP system; a large volume of low-latency write transactions; hot data versus rarely accessed cold data; optimizing #temp table usage.

Data in memory-optimized tables resides fully in memory, so pages never need to be read into cache from disk when the tables are accessed; all the data is stored in memory, all the time. Operations on memory-optimized tables use the same transaction log that is used for operations on disk-based tables, and as always, the transaction log is stored on disk. In case of a system crash or server shutdown, the rows of data in durable memory-optimized tables can be recreated from the checkpoint files and the transaction log. Memory-optimized tables provide highly optimized data access structures using hash and nonclustered ordered indexes. Data access and transaction isolation are handled through a multi-version concurrency control mechanism that provides an optimistic, non-blocking implementation. While implemented differently from a traditional RDBMS, In-Memory OLTP still provides ACID compliance (Atomic, Consistent, Isolated, Durable).
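As a concrete illustration, a minimal sketch of both durability options. Table and column names are hypothetical, and the database must already have a memory-optimized filegroup (see the demo on slide 11):

-- Durable: rows survive a restart (the default)
CREATE TABLE dbo.OrdersHot
(
    OrderID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    CustomerID INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Non-durable: the schema survives a restart, the rows do not
CREATE TABLE dbo.SessionCache
(
    SessionID INT NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload   NVARCHAR(4000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);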

6 Memory-Optimized Tables
Speed gains achieved through:
Optimistic concurrency (row versioning)
No locks or latches
Natively compiled modules (optional): stored procedures, triggers, and user-defined scalar functions

Durability options:
Schema only (non-durable)
Schema and data (durable)
Delayed durability

Row versioning: Pessimistic concurrency in disk-based tables means locking and blocking. Locking ensures data consistency: only one transaction can change a piece of data at a time. Creating row versions switches the concurrency model from pessimistic to optimistic, resolving contention issues between readers and writers. This is achieved through a process called multi-version concurrency control, which allows queries to see data as of a specific point in time; the view of the data is consistent, and this level of consistency is achieved by creating and referencing row versions. For memory-optimized tables, row versions are stored in memory. INSERTs and UPDATEs create row versions, which consume memory; an UPDATE is really a logical DELETE plus an INSERT.

Durability: Memory-optimized tables are fully durable by default, and fully durable transactions on memory-optimized tables are fully atomic, consistent, isolated, and durable (ACID). Memory-optimized tables and natively compiled stored procedures support a subset of Transact-SQL. The primary store for memory-optimized tables is main memory: rows in the table are read from and written to memory, and the entire table resides in memory. A second copy of the table data is maintained on disk, but only for durability purposes; data in memory-optimized tables is only read from disk during database recovery, for example after a server restart. The data in non-durable memory-optimized tables will not survive a SQL Server restart or AG failover; the schema is recreated after one of these events. Nonetheless, operations on these tables meet all other transactional requirements (atomic, consistent, isolated). For even greater performance gains, In-Memory OLTP supports durable tables with delayed transaction durability: delayed durable transactions are saved to disk soon after the transaction has committed and returned control to the client. In exchange for the increased performance, committed transactions that have not been saved to disk are lost in a server crash or failover. A sketch of a natively compiled procedure follows.
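A minimal sketch of a natively compiled procedure against the hypothetical dbo.OrdersHot table from the earlier sketch; the ATOMIC block options shown are the ones the engine requires:

CREATE PROCEDURE dbo.usp_InsertOrder
    @CustomerID INT,
    @OrderDate  DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.OrdersHot (CustomerID, OrderDate)
    VALUES (@CustomerID, @OrderDate);
END;

-- Delayed durability can also be opted into per commit once the database allows it:
-- ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;
-- COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);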

7 Memory Optimized Table Limitations
Cross-database transactions are not allowed
Cannot access linked servers
CHECKDB and CHECKTABLE ignore in-memory tables
Bulk logging and minimal logging are not supported
Maximum of 8 indexes on a single table
FK constraints must reference a primary key, not a unique constraint; the referenced table must also be memory-optimized
Legacy LOB data types (text, ntext, image) as well as columns of type XML or CLR are not allowed
IDENTITY columns must use a SEED and INCREMENT of 1
DML triggers must be created as natively compiled modules

Microsoft continues to reduce the number of limitations with each new SQL Server release.

8 In-Mem OLTP 2016 Table Improvements
Use ALTER TABLE to:
Add and drop columns, indexes, and constraints
Modify column definitions
Change the number of hash buckets in a hash index
Most ALTER TABLE operations can be multi-threaded and are log-optimized
Columnstore indexes are supported
Create memory-optimized table types (table variables):
Stored only in memory
Not stored in tempdb; do not use any tempdb resources
Avoid contention on tempdb PFS and SGAM pages

Each ALTER statement causes a complete rebuild of the table. If multiple changes need to be made to a single table, changes of the same type can be combined into a single ALTER TABLE command (see the sketch after these notes). For example, you can ADD multiple columns, indexes, and constraints in a single ALTER TABLE, and you can DROP multiple columns, indexes, and constraints in a single ALTER TABLE, but you cannot have an ADD and a DROP in the same ALTER TABLE command. Combine changes wherever possible to keep the number of table rebuilds to a minimum.
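For example, a hedged sketch combining two same-type (ADD) changes into one rebuild, plus a bucket-count change; the table and index names are hypothetical:

-- One rebuild covers both additions
ALTER TABLE dbo.OrdersHot
    ADD Quantity INT NULL,
        INDEX ix_OrderDate NONCLUSTERED (OrderDate);

-- Resizing a hash index's bucket count is also a rebuild
-- (assumes an existing hash index named ix_CustomerID)
ALTER TABLE dbo.OrdersHot
    ALTER INDEX ix_CustomerID REBUILD WITH (BUCKET_COUNT = 2097152);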

9 Memory-Optimized Table Indexes
Basics
Exist only in active memory
Rebuilt when the database is brought back online
SQL UPDATE statements that change indexes are not logged
Entries in a memory-optimized index contain a direct memory address to the row in the table
Memory-optimized indexes have no fixed pages
No traditional fragmentation within a page, so no fill factor

Every memory-optimized table must have at least one index. If it is a durable table (specified with the option SCHEMA_AND_DATA), it must have a declared primary key, which can then be supported by the required index.

Nature of memory-optimized indexes: On a memory-optimized table, every index is also memory-optimized. There are several ways in which an index on a memory-optimized table differs from a traditional index on a disk-based table. Each memory-optimized index exists only in active memory and has no representation on disk; memory-optimized indexes are rebuilt when the database is brought back online. When an SQL UPDATE statement modifies data in a memory-optimized table, corresponding changes to its indexes are not written to the log. The entries in a memory-optimized index contain a direct memory address to the row in the table; in contrast, entries in a traditional B-tree index on disk contain a key value that the system must first use to find the memory address of the associated table row. Memory-optimized indexes have no fixed pages as disk-based indexes do; they do not accrue the traditional type of fragmentation within a page, so they have no fill factor.

10 Memory-Optimized Table Indexes
Requirements
Each CREATE TABLE statement must include between 1 and 8 index declarations
Each index must be one of: a hash index, or a nonclustered index (the default; an internal Bw-tree structure)
Durable tables must have a declared primary key
The index supporting the primary key is the only unique index allowed

We do not have time for an in-depth discussion of hash indexes versus nonclustered indexes. This is an important topic, however, so read the documentation to ensure you understand it thoroughly. A brief contrast appears in the sketch below.
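A brief sketch contrasting the two declarations; the names and bucket count are hypothetical, and a common rule of thumb is a bucket count of one to two times the expected number of distinct keys:

CREATE TABLE dbo.AccountBalances
(
    -- Hash index: best for equality (point) lookups
    AccountID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    -- Nonclustered index: supports range scans and ordered retrieval
    RegionID  INT NOT NULL INDEX ix_Region NONCLUSTERED,
    Balance   MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);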

11 Demo: Memory-Optimized Tables
Walk through the steps:
Create the database
Alter the database – create the memory-optimized filegroup
Create memory-optimized tables
Indexes?
Demo performance
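A minimal sketch of the setup steps; the database name and container path are hypothetical:

-- 1. Create the database
CREATE DATABASE ImOltpDemo;
GO
-- 2. Add a memory-optimized filegroup and a container for checkpoint files
ALTER DATABASE ImOltpDemo
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE ImOltpDemo
    ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
    TO FILEGROUP imoltp_fg;
GO
-- 3. Create memory-optimized tables as shown on slide 5, then compare
--    insert/update throughput against an equivalent disk-based table.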

12 Memory Consumption
Memory-optimized tables reside fully in memory:
Data access at the speed of memory means fast - yeah!
Modifications to data create row versions, which consume more memory - uh oh!
In-memory objects compete for server memory with data (the buffer pool), plan caches, and internal structures
No limit on the size of in-memory tables in SQL Server 2016!

SQL Server In-Memory OLTP consumes memory in different patterns than disk-based tables. You can monitor the amount of memory allocated and used by memory-optimized tables and indexes in your database using the DMVs or performance counters provided for memory and the garbage collection subsystem. This gives you visibility at both the system and database level and lets you prevent problems due to memory exhaustion.

13 Memory Consumption: Memory Allocation Over Time
It is important to monitor memory consumption by in-memory tables. As the graphic on this slide shows, as in-memory tables grow they can reduce the amount of memory available to the buffer pool, which holds live data for disk-based tables. Known strategies exist to monitor and manage memory consumption by in-memory objects.

14 Memory Requirements
Calculate estimated memory consumption:
memory for the table
memory for the indexes (hash indexes and nonclustered indexes)
memory for row versioning
memory for table variables
memory for growth

The size of a memory-optimized table corresponds to the size of its data plus some overhead for row headers. When migrating a disk-based table to memory-optimized, the size of the memory-optimized table will roughly correspond to the size of the clustered index or heap of the original disk-based table. Indexes on memory-optimized tables tend to be smaller than nonclustered indexes on disk-based tables. The size of a nonclustered index is on the order of [primary key size] * [row count]. The size of a hash index is [bucket count] * 8 bytes. When there is an active workload, additional memory is needed to account for row versioning and various operations. How much memory is needed depends on the workload, but to be safe the recommendation is to start with two times the expected size of memory-optimized tables and indexes, and observe the memory requirements in practice. The overhead for row versioning always depends on the characteristics of the workload; long-running transactions in particular increase the overhead. For most workloads using larger databases (e.g., >100 GB), the overhead tends to be limited (25% or less). Note that row versions for memory-optimized table variables are NOT handled by the garbage collection process; that memory is released only when the table variable goes out of scope. A back-of-envelope sizing sketch follows.
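A back-of-envelope sketch of the arithmetic above, using hypothetical numbers (5 million rows, a ~200-byte payload, and a 2^23-bucket hash index):

DECLARE @rows     BIGINT = 5000000,  -- hypothetical row count
        @rowbytes INT    = 224;      -- ~200-byte payload + ~24-byte row header
SELECT (@rows * @rowbytes) / 1048576 AS table_mb,         -- ~1068 MB
       (8388608 * 8)       / 1048576 AS hash_index_mb;    -- 2^23 buckets * 8 bytes = 64 MB
-- Rule of thumb from above: provision ~2x this total for row versions and growth.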

15 Memory Management Strategies
Do the math!
Bind the database to a resource pool
Monitor and troubleshoot memory usage
Consider application-level partitioning for larger tables

A resource pool represents a subset of physical resources that can be governed. By default, SQL Server databases are bound to, and consume the resources of, the default resource pool. To protect SQL Server from having its resources consumed by one or more memory-optimized tables, and to prevent other memory users from consuming memory needed by memory-optimized tables, create a separate resource pool to manage memory consumption for the database with memory-optimized tables (see the sketch below). A database can be bound to only one resource pool, but you can bind multiple databases to the same pool. SQL Server 2016 includes several relevant DMVs: sys.dm_db_xtp_table_memory_stats, sys.dm_os_memory_clerks, sys.dm_xtp_system_memory_consumers, and sys.dm_db_xtp_memory_consumers.

You can emulate partitioned tables with memory-optimized tables by maintaining a partitioned table and a memory-optimized table with a common schema. Current data is inserted and updated in the memory-optimized table, while less frequently accessed data is maintained in the traditional partitioned table. An application that knows the active data is in a memory-optimized table can use natively compiled stored procedures to access the data. For operations that need to access the entire span of data, or that may not know which table holds the relevant data, use interpreted Transact-SQL to join the memory-optimized table with the partitioned table.
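A hedged sketch of the binding steps. The pool name, percentages, and database name are hypothetical; sys.sp_xtp_bind_db_resource_pool is the documented binding procedure, and the binding takes effect only after the database is cycled offline and back online:

CREATE RESOURCE POOL ImOltpPool
    WITH (MIN_MEMORY_PERCENT = 60, MAX_MEMORY_PERCENT = 60);
ALTER RESOURCE GOVERNOR RECONFIGURE;

EXEC sys.sp_xtp_bind_db_resource_pool
     @database_name = N'ImOltpDemo',
     @pool_name     = N'ImOltpPool';

-- Cycle the database so the binding takes effect
ALTER DATABASE ImOltpDemo SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE ImOltpDemo SET ONLINE;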

16 Monitoring Memory Consumption
SSMS Reports: consumption at the database level
DMVs:
sys.dm_db_xtp_table_memory_stats: user tables, indexes, and system objects
sys.dm_xtp_system_memory_consumers: internal system structures
sys.dm_os_memory_objects: run-time structures
sys.dm_os_memory_clerks: In-Memory OLTP engine
Internal structures include: transactional structures, buffers for data and delta files, and garbage collection structures.

Demo these queries:

-- memory-optimized tables and indexes
SELECT OBJECT_NAME(object_id) AS Name, *
FROM sys.dm_db_xtp_table_memory_stats;

-- internal structures
SELECT memory_consumer_desc,
       allocated_bytes/1024 AS allocated_bytes_kb,
       used_bytes/1024 AS used_bytes_kb,
       allocation_count
FROM sys.dm_xtp_system_memory_consumers;

-- run-time structures
SELECT memory_object_address, pages_in_bytes, bytes_used, type
FROM sys.dm_os_memory_objects
WHERE type LIKE '%xtp%';

-- this DMV accounts for all memory used by the In-Memory OLTP (Hekaton) engine
SELECT type, name, memory_node_id, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%xtp%';

17 Managing Storage
File and filegroup configuration
Storage capacity
Storage throughput (IOPS)
Data compression is not supported
Enable Instant File Initialization

Files and filegroups: Memory-optimized tables are mapped to a memory-optimized data filegroup, similar to FILESTREAM; you cannot control how this works. Storage consists of data and delta files: the data for memory-optimized tables is stored in one or more data and delta file pairs (also called a checkpoint file pair, or CFP). Data files store inserted rows, and delta files reference deleted rows. During the execution of an OLTP workload, as DML operations update, insert, and delete rows, new CFPs are created to persist the new rows, and references to the deleted rows are appended to delta files. Over time, the number of data and delta files grows, causing increased disk space usage and increased recovery time. To help prevent these inefficiencies, older closed data and delta files are merged, based on a merge policy. Changes are stored by appending to active files; reading and writing is sequential. A query to inspect the checkpoint file pairs appears after these notes.

Storage capacity: Microsoft recommends provisioning disk space four times the size of durable, in-memory tables (excluding indexes).

IOPS: All changes made to disk-based tables or durable memory-optimized tables are captured in one or more transaction log records. When a transaction commits, SQL Server writes the log records associated with the transaction to disk before communicating to the application or user session that the transaction has committed. This guarantees that changes made by the transaction are durable. The transaction log for memory-optimized tables is fully integrated with the same log stream used by disk-based tables. This integration allows existing transaction log backup, recovery, and restore operations to continue to work without requiring any additional steps. However, since In-Memory OLTP can increase the transaction throughput of your workload significantly, log IO may become a performance bottleneck. To sustain the increased throughput, ensure the log IO subsystem can handle the increased load; Microsoft recommends provisioning for a 3x increase in transaction log throughput. Another factor in estimating the IOPS for storage is the recovery time for memory-optimized tables. Data from durable tables must be read into memory before a database is made available to applications, and loading data into memory-optimized tables is commonly done at the speed of the storage. So if the total storage for durable, memory-optimized tables is 60 GB and you want to be able to load this data in 1 minute, the storage must sustain 1 GB/sec.

IFI: Without Instant File Initialization, memory-optimized storage files (data and delta files) are zero-initialized upon creation, which can have a negative impact on the performance of your workload.
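To see the CFPs described above, a sketch against sys.dm_db_xtp_checkpoint_files; run it in the database containing the memory-optimized tables (column names per the SQL Server 2016 version of the DMV):

SELECT file_type_desc,                           -- e.g., DATA or DELTA
       state_desc,                               -- e.g., PRECREATED, ACTIVE, MERGE TARGET
       COUNT(*)                          AS file_count,
       SUM(file_size_in_bytes) / 1048576 AS size_mb
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc
ORDER BY file_type_desc, state_desc;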

18 Backup, Restore & Recovery
Full, differential, and transaction log backups fully support databases with durable in-memory tables
The size of a full backup is typically larger than the data's size in memory, but smaller than its on-disk storage
Piecemeal restores are supported
Memory-optimized tables must be loaded into memory before the database is available for use, increasing recovery time

During recovery or restore operations, the In-Memory OLTP engine reads data and delta files for loading into physical memory. The load time is determined by: the amount of data to load; sequential I/O bandwidth; the degree of parallelism, determined by the number of file containers and processor cores; and the number of log records in the active portion of the log that need to be redone.

When SQL Server restarts, each database goes through a recovery process that consists of three phases:

The analysis phase. A pass is made over the active transaction log to detect committed and uncommitted transactions. The In-Memory OLTP engine identifies the checkpoint to load and preloads its system table log entries. It also processes some file allocation log records.

The redo phase. This phase runs concurrently on both disk-based and memory-optimized tables. For disk-based tables, the database is brought to the current point in time and locks taken by uncommitted transactions are acquired. For memory-optimized tables, data from the data and delta file pairs is loaded into memory and then updated from the active transaction log based on the last durable checkpoint. When these operations on disk-based and memory-optimized tables are complete, the database is available for access.

The undo phase. Uncommitted transactions are rolled back.

Loading memory-optimized tables into memory can affect the recovery time objective (RTO). To improve the load time of memory-optimized data, the In-Memory OLTP engine loads the data/delta files in parallel. The backup and restore commands themselves are unchanged; a sketch follows.
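A minimal sketch showing that the commands are the standard ones; the paths and database name are hypothetical:

BACKUP DATABASE ImOltpDemo
    TO DISK = N'X:\Backups\ImOltpDemo_full.bak' WITH CHECKSUM, INIT;
BACKUP LOG ImOltpDemo
    TO DISK = N'X:\Backups\ImOltpDemo_log.trn' WITH CHECKSUM;

-- On restore, memory-optimized data is streamed back into memory before
-- the database comes online, so expect a longer recovery tail
RESTORE DATABASE ImOltpDemo
    FROM DISK = N'X:\Backups\ImOltpDemo_full.bak' WITH NORECOVERY;
RESTORE LOG ImOltpDemo
    FROM DISK = N'X:\Backups\ImOltpDemo_log.trn' WITH RECOVERY;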

19 In-Memory OLTP Checklist
Consider workload patterns benefiting from In-Memory OLTP
Carefully calculate memory requirements
Plan storage requirements
Account for the impact on database restore and recovery
Account for the impact on database portability

20 Shameless Plug
Engage with our community:
Facebook:
Website:
Share your tough SQL Server performance problems with us:
Expert SQL Server Performance Tips:
Engage our Professional Services Group: sentryone.com/proservices

21 Resources
SQL Server 2016 In-Memory OLTP (Books Online)
SQL Server In-Memory OLTP Internals for SQL Server 2016
In-Memory OLTP – Common Workload Patterns and Migration Considerations
Quick Start 1: In-Memory OLTP Technologies for Faster T-SQL Performance
Memory-Optimized Tables
Managing Memory for In-Memory OLTP

22 Resources
Creating and Managing Storage for Memory-Optimized Objects
Backup, Restore, and Recovery of Memory-Optimized Tables
SQL Server Support for In-Memory OLTP
Migrating to In-Memory OLTP
In-Memory OLTP posts on Ned Otter's blog

