1 InMemory improvements on SQL Server 2016
Sam Mesel, Sr. SQL Premier Field Engineer (Twitter / LinkedIn: sam mesel)

2 Demo What does 6, 7, … 10+ X mean?

3 Considerations for using InMemory objects

4 Memory Considerations
Data from memory-optimized OLTP objects resides in memory at all times:
Configure SQL Server with sufficient memory to store memory-optimized tables.
Failure to allocate memory fails the transactional workload at run time.
Other SQL workloads can slow down to unacceptable performance.
Memory management:
Limit memory consumption using Resource Governor.
Monitor memory consumption/availability using DMVs and Performance Monitor (PerfMon).
Freeing memory is not synchronous in most cases.
Sizing:
Table size, rule of thumb: 2 x the size of the data.
Index size: hash indexes use 8 bytes per bucket, i.e. 8 x [actual bucket count]; nonclustered (range) indexes are variable size.
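The sizing rules above can be sketched as a quick calculation. This is an illustrative sketch with made-up numbers, not figures from the deck; the power-of-2 rounding matches the BUCKET_COUNT behavior described later:

```sql
-- Rule-of-thumb sizing sketch (illustrative numbers, not from this deck).
-- Table: plan roughly 2 x the data size, e.g. 10 GB of rows -> ~20 GB of memory.
-- Hash index: 8 bytes per bucket, and BUCKET_COUNT rounds up to a power of 2.
DECLARE @requested_buckets bigint = 1000000;
DECLARE @actual_buckets bigint =
    POWER(CAST(2 AS bigint), CAST(CEILING(LOG(@requested_buckets, 2)) AS int));
SELECT @actual_buckets     AS actual_bucket_count,  -- 1,048,576 (2^20)
       @actual_buckets * 8 AS hash_index_bytes;     -- ~8 MB for this one index
```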

5 Can InMemory objects cause problems?
[Diagram: Max Server Memory split between the buffer pool, internal structures, and a growing set of memory-optimized tables squeezing the available memory.]
SQL Server In-Memory OLTP can potentially use more memory, and uses it in different ways, than traditional SQL Server workloads. In-Memory OLTP allocates memory separately from the buffer pool, and as memory-optimized tables grow, the amount of memory you installed can become inadequate for your needs. If so, you could run out of memory. Obviously, it is best not to get into a low-memory or OOM (Out of Memory) situation. Good planning and monitoring can help avoid OOM; still, the best planning does not always foresee what actually happens, and you might end up with low memory or OOM. There are two steps to recovering from OOM:
Open a DAC (Dedicated Administrator Connection).
Take corrective action, such as freeing up existing memory or deleting non-essential memory-optimized table rows and waiting for garbage collection. The garbage collector returns the memory used by these rows to available memory. The In-Memory OLTP engine collects garbage rows aggressively; however, a long-running transaction can prevent garbage collection. For example, if you have a transaction that runs for 5 minutes, any row versions created by update/delete operations while that transaction was active cannot be garbage collected. Ultimately the best solution, if possible, is to install additional physical memory.

6 Possible Scenarios Scenario Symptom Diagnosis Solution
Scenario 1: Inserting more rows than can fit in memory. Symptom: transactions start failing.
Scenario 2: Recovering a database that does not fit in memory. Symptom: the database does not come online.
Scenario 3: Memory pressure from In-Memory OLTP on other workloads. Symptom: operations in other workloads start failing.
Diagnosis (all scenarios): read the error log; identify via DMVs or SSMS whether In-Memory OLTP is using most of the memory.
Solutions: free up memory; add memory; identify and stop long-running transactions.

7 Memory monitoring: Standard Tools
SQL Server provides built-in standard reports to help monitor memory consumption by memory-optimized tables: Reports > Standard Reports > Memory Usage By Memory Optimized Objects

8 Memory monitoring: DMVs
Several DMVs report memory consumption:
sys.dm_db_xtp_table_memory_stats: memory used by memory-optimized tables and indexes
sys.dm_xtp_system_memory_consumers: memory used by internal system structures
sys.dm_os_memory_objects: memory used at run time when accessing memory-optimized tables
sys.dm_os_memory_clerks: memory used by the In-Memory engine across the instance
Reference: Monitor and Troubleshoot Memory Usage
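As a starting point, the first DMV above can be queried per table. This is a sketch; run it in the context of the database that holds the memory-optimized objects:

```sql
-- Per-table memory footprint of memory-optimized tables and their indexes.
SELECT OBJECT_SCHEMA_NAME(t.object_id) AS schema_name,
       OBJECT_NAME(t.object_id)        AS table_name,
       t.memory_allocated_for_table_kb,
       t.memory_used_by_table_kb,
       t.memory_allocated_for_indexes_kb,
       t.memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS t
ORDER BY t.memory_allocated_for_table_kb DESC;
```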

9 Maximum Memory for InMemory
The recommendation is to keep In-Memory OLTP objects under 2 TB (up from 256 GB in SQL Server 2014). There is no longer a hard limit on data size; the previous limit was driven by the number of checkpoint files (used during the recovery process).

10 Microsoft recommendations for InMemory
Implement Resource Governor to limit memory consumption Bind a Database with Memory-Optimized Tables to a Resource Pool
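The binding recommended above can be sketched as follows. The pool and database names are hypothetical; adjust the percentage for your environment:

```sql
-- Hypothetical pool and database names; tune MAX_MEMORY_PERCENT for your server.
CREATE RESOURCE POOL Pool_InMemory WITH (MAX_MEMORY_PERCENT = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- Bind the database that holds memory-optimized tables to the pool.
EXEC sys.sp_xtp_bind_db_resource_pool
     @database_name = N'InMemDB',
     @pool_name     = N'Pool_InMemory';

-- The binding takes effect the next time the database comes online.
ALTER DATABASE InMemDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE InMemDB SET ONLINE;
```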

11 InMemory Tables

12 InMemory Tables
Fully durable by default: transactions involving memory-optimized tables are atomic, consistent, isolated, and durable (ACID).
Can be accessed via:
Interop T-SQL, which allows the full range of SQL syntax.
Natively compiled stored procedures, which support a subset of it.
The primary store for memory-optimized tables is main memory. All operations on these objects happen in memory (the entire table resides in memory). A second copy of the table is maintained on disk, for durability purposes only.
SQL Server also supports non-durable memory-optimized tables: their operations are not logged and their data is not persisted. For greater performance gains, durable memory-optimized tables also support delayed transaction durability.
Reference: Introduction to Memory-Optimized Tables
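To make the durability options above concrete, here is a hedged sketch with a hypothetical table name:

```sql
-- Non-durable table: the schema survives a restart, the data does not.
CREATE TABLE dbo.SessionCache (
    SessionID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Payload   varbinary(4000) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

-- Durable tables can additionally trade a little durability for speed
-- by allowing delayed transaction durability at the database level.
ALTER DATABASE CURRENT SET DELAYED_DURABILITY = ALLOWED;
```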

13 Internal Structures
Rows: the row structure is optimized for memory access. There is no concept of pages; rows are versioned, and there are no in-place updates.
Indexes:
Every memory-optimized table is required to have at least one index.
Indexes do not exist on disk, only in memory; they are recreated during recovery/restart.
Index types: hash indexes for point lookups; range (nonclustered) indexes for ordered scans and range scans. There is no clustered index.
Indexes point to rows; access to rows is via an index.
Memory-optimized tables are fully durable by default and, like transactions on (traditional) disk-based tables, fully durable transactions on memory-optimized tables are fully atomic, consistent, isolated, and durable (ACID). Memory-optimized tables and natively compiled stored procedures support a subset of Transact-SQL; the supported SQL Server features meet the requirements of OLTP applications that need optimal performance.
The primary store for memory-optimized tables is main memory: rows are read from and written to memory, and the entire table resides in memory. A second copy of the table data is maintained on disk, but only for durability purposes. Data in memory-optimized tables is read from disk only during database recovery, for example after a server restart.
Rows in memory-optimized tables are versioned, meaning each row potentially has multiple versions, all maintained in the same table data structure. Row versioning allows concurrent reads and writes on the same row.
Indexes on memory-optimized tables are not stored as traditional B-trees. Memory-optimized tables support hash indexes, stored as hash tables with linked lists connecting all the rows that hash to the same value, and range indexes, which for memory-optimized tables are stored using special Bw-trees. Every memory-optimized table must have at least one index, because it is the indexes that combine all the rows into a single table.
Memory-optimized tables are never stored as unorganized sets of rows, the way a disk-based table heap is stored. Indexes are never stored on disk, are not reflected in the on-disk checkpoint files, and operations on indexes are never logged. The indexes are maintained automatically during all modification operations on memory-optimized tables, just like B-tree indexes on disk-based tables; but after a SQL Server restart, the indexes on memory-optimized tables are rebuilt as the data is streamed into memory.

14 At the price of MEMORY Allocated
Hash index guideline: BUCKET_COUNT = 1 to 2 times the number of unique index key values. Oversizing is OK, at the price of memory allocated: the actual bucket count is rounded up to the next power of 2 (in this case 1,024).
CREATE TABLE [Customer](
    [CustomerID] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
    [Name] NVARCHAR(250) COLLATE Latin1_General_100_BIN2 NOT NULL,
    [CustomerSince] DATETIME2 NULL
        INDEX [ICustomerSince] NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
Collations are specified inline and are not restricted to binary in SQL 2016. This table is memory optimized. Indexes are specified inline; NONCLUSTERED (range) indexes are supported.
MEMORY_OPTIMIZED indicates whether the table is memory optimized.
DURABILITY: SCHEMA_AND_DATA (the default for memory-optimized tables) makes this a durable, memory-optimized table; DURABILITY = SCHEMA_AND_DATA can also be used with MEMORY_OPTIMIZED = OFF. SCHEMA_ONLY makes the table non-durable: the schema is persisted, but data updates are not persisted across a restart of a database with memory-optimized objects. DURABILITY = SCHEMA_ONLY is not allowed with MEMORY_OPTIMIZED = OFF.
This table is durable: DURABILITY = SCHEMA_AND_DATA. For non-durable tables, use DURABILITY = SCHEMA_ONLY.

15 Create Type: DDL
CREATE TYPE [Sales].[SalesOrderDetailType_inmem] AS TABLE (
    [OrderQty] [smallint] NOT NULL,
    [ProductID] [int] NOT NULL,
    [SpecialOfferID] [int] NOT NULL,
    [LocalID] [int] NOT NULL,
    INDEX [IX_ProductID] HASH ([ProductID]) WITH (BUCKET_COUNT = 8),
    INDEX [IX_SpecialOfferID] HASH ([SpecialOfferID]) WITH (BUCKET_COUNT = 8)
) WITH ( MEMORY_OPTIMIZED = ON )

16 Row format and timestamps
There are three internal counters that manage timestamp values:
TRANSACTION-ID: reset when the instance is restarted; incremented with every new transaction.
GLOBAL (Global Transaction Timestamp): not reset on SQL Server restart; incremented each time a transaction ends.
The oldest active transaction currently running in the system.
The timestamps are 64-bit BIGINTs.
Row format: a Row Header followed by the Payload (the actual row). Header layout: Begin Ts (8 bytes), End Ts (8 bytes), StmtID (4 bytes), IdxLinkCount (2 bytes + 2 bytes padding), then one index pointer (8 bytes) per index.
Begin Ts: holds the TRANSACTION-ID until the inserting transaction commits, then the GLOBAL timestamp. End Ts: initially a special value, 'infinity'; when the row is deleted it holds the TRANSACTION-ID until the deleting transaction commits, then the GLOBAL timestamp. StmtID: the ID of the statement that created the row; if the row is accessed again by the same statement, it is skipped.
With In-Memory OLTP, data and index rows have a very different storage format, as there is no concept of the 8 KB page that exists elsewhere in SQL Server. However, for migration reasons, the size of an in-memory page was kept at 8 KB, as in the rest of SQL Server.
Row Header: every database that supports memory-optimized tables manages two internal counters used to generate these timestamps. The Transaction-ID counter is a global, unique value that is reset when the SQL Server instance is restarted; it is incremented every time a new transaction starts. The Global Transaction Timestamp is also global and unique, but is not reset on a restart; it is incremented each time a transaction ends and begins validation processing, and the new value then becomes the timestamp for the current transaction. The Global Transaction Timestamp is initialized during recovery with the highest transaction timestamp found among the recovered records. The value of Begin-Ts is the timestamp of the transaction that inserted the row, and the End-Ts value is the timestamp of the transaction that deleted the row.
A special value (referred to as 'infinity') is used as the End-Ts value for rows that have not been deleted. However, when a row is first inserted, before the insert transaction completes, the transaction's timestamp is not known, so the global Transaction-ID value is used for Begin-Ts until the transaction commits. Similarly, for a delete operation the transaction timestamp is not known, so the End-Ts value for the deleted rows uses the global Transaction-ID value, which is replaced once the real transaction timestamp is known. As we'll see when discussing data operations, the Begin-Ts and End-Ts values determine which other transactions will be able to see this row.
The header also contains a four-byte statement ID value. Every statement within a transaction has a unique StmtId value, and when a row is created it stores the StmtId of the statement that created it. If the same row is then accessed again by the same statement, it can be skipped.
Finally, the header contains a two-byte value (idxLinkCount), which is really a reference count indicating how many indexes reference this row. Following the idxLinkCount value is a set of index pointers, described in the next section; the number of pointers equals the number of indexes. The initial reference value of 1 is needed so the row can be referenced by the garbage collection (GC) mechanism even if the row is no longer connected to any indexes; the GC is considered the 'owner' of the initial reference. As mentioned, there is a pointer for each index on the table, and it is these pointers plus the index data structures that connect the rows together. There are no other structures for combining rows into a table; this creates the requirement that all memory-optimized tables have at least one index.
Also, since the number of pointers is part of the row structure, and rows are never modified, all indexes must be defined at the time the memory-optimized table is created.
Payload: the payload is the row itself, containing the key columns plus all the other columns in the row (so all indexes on a memory-optimized table are effectively covering indexes). The payload format can vary by table. As mentioned earlier in the section on creating tables, the In-Memory OLTP compiler generates the DLLs for table operations, and as long as it knows the payload format used when inserting rows into a table, it can generate the appropriate commands for all row operations.
Begin/end timestamps determine a row version's validity and visibility. There is no concept of data pages; only rows exist.

17 Row Versions
[Diagram: hash index on Name pointing to row versions: (50, ∞) John, Paris; (100, ∞) John, Prague; (90, ∞) Susan, Bogota.]
Rows in memory-optimized tables are versioned, meaning that each row in the table potentially has multiple versions. Row versioning is used to allow concurrent reads and writes on the same row.
Transaction 99, running a compiled query SELECT City WHERE Name = 'John': a simple hash lookup returns a direct pointer to the 'John' row.
Transaction 100, UPDATE City = 'Prague' WHERE Name = 'John': no locks of any kind, no interference with transaction 99.
A background operation will unlink and deallocate the old 'John' row after transaction 99 completes.
Reference: Introduction to Memory-Optimized Tables

18 Hash Indexes
Hash index with (bucket_count = 8). The hash function f maps values to buckets and is built into the system.
[Diagram: an array of 8 hash buckets (8-byte memory pointers); f(Jane), f(John), f(Prague), f(Susan) map to distinct buckets, while f(Bogota) and f(Beijing) collide in the same bucket.]
All memory-optimized tables must have at least one index, because it is the indexes that connect the rows together. As mentioned earlier, data rows are not stored on pages, so there is no collection of pages or extents, no partitions or allocation units that can be referenced to get all the pages for a table. There is some concept of index pages for one of the index types, but they are stored differently than indexes for disk-based tables.
In-Memory OLTP indexes, and changes made to them during data manipulation, are never written to disk. Only the data rows, and changes to the data, are written to the transaction log. All indexes on memory-optimized tables are created from their index definitions during database recovery.
A hash index consists of an array of pointers; each element of the array is called a hash bucket, and the number of buckets can be specified at index definition time. The hash function is applied to the index key columns, and the result of the function determines which bucket a key falls into. Each bucket has a pointer to the rows whose hashed key values map to that bucket. The hash function used for hash indexes has the following characteristics:
SQL Server has one hash function, used for all hash indexes.
The hash function is deterministic: the same index key is always mapped to the same bucket.
Multiple index keys may be mapped to the same hash bucket.
The hash function is balanced, meaning the distribution of index key values over hash buckets typically follows a Poisson distribution, which is not an even distribution; index key values are not evenly distributed across the buckets.
If two index keys are mapped to the same hash bucket, there is a hash collision. A large number of hash collisions can have a performance impact on read operations. Hash indexes are primarily used for point lookups, not for range scans.

19 Hash Index Traversal
[Diagram: hash indexes on Name and City; buckets f(Jane), f(Prague), f(Susan), f(Bogota) point into row chains such as (50, ∞) Jane, Prague and (90, ∞) Susan, Bogota.]
The hash function is applied to the index key columns, and the result of the function determines which bucket the key falls into. If multiple key values hash to the same value (a collision), they fall in the same bucket. Each bucket has a pointer to the rows whose hashed key values map to that bucket.
The bucket count should be set to between one and two times the maximum expected number of distinct values in the index key. However, anything greater than 1/5 and less than 5 times the expected number of unique key values is usable. For very large tables, a large bucket count for all indexes can consume a significant amount of memory without significantly improving performance. The bucket count is internally rounded up to the next power of two, so choosing a bucket count of (2^N) + 1 results in a bucket count of 2^(N + 1); in some cases this can use a lot of additional memory. For the primary key index, the number of distinct values in the key is the same as the cardinality of the table.
To achieve the best performance for hash indexes, you must balance the amount of memory allocated to the hash table against the number of distinct values in the index key. There is also a balance between the performance of point lookups and of table scans. The higher the bucket_count value, the more empty buckets there are in the index; this impacts memory usage (8 bytes per bucket) and the performance of table scans, as each bucket is scanned as part of a table scan. The lower the bucket count, the more values are assigned to a single bucket; this decreases performance for point lookups and inserts, because SQL Server may need to traverse several values in a single bucket to find the value specified by the search predicate.
If the bucket count is significantly (ten times) lower than the number of unique index keys, many buckets will hold multiple index keys. This degrades the performance of most DML operations, particularly point lookups (lookups of individual index keys); for example, you may see poor performance of SELECT queries and of UPDATE and DELETE operations with equality predicates matching the index key columns in the WHERE clause.
For more details: sys.dm_db_xtp_hash_index_stats
© 2014 Microsoft Corporation Microsoft Confidential
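The bucket-count trade-off described above can be checked in practice. A sketch of a health query against the DMV named on this slide:

```sql
-- Chain lengths reveal under-sized hash indexes; empty buckets reveal over-sized ones.
SELECT OBJECT_NAME(hs.object_id) AS table_name,
       i.name                    AS index_name,
       hs.total_bucket_count,    -- actual (rounded-up) bucket count
       hs.empty_bucket_count,
       hs.avg_chain_length,      -- high values suggest too few buckets
       hs.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS hs
JOIN sys.indexes AS i
  ON i.object_id = hs.object_id AND i.index_id = hs.index_id;
```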

20 Hash Index Insert
T100: INSERT (John, Prague)
[Diagram: hash indexes on Name and City; existing rows (50, ∞) Jane, Prague and (90, ∞) Susan, Bogota; the new row (100, ∞) John, Prague is linked in via f(John) and f(Prague).]
When a row is inserted, the hash function is applied to the index key at insert time, and it determines which bucket the row will reside in. If there is a hash collision, due either to a duplicate key value or to another key value hashing to the same value, the row is inserted into the chain (the order is not important and is subject to change) that the bucket points to. As discussed in the row format, a begin timestamp, the timestamp of the corresponding transaction, is added to every inserted row, and the end timestamp is kept at infinity, which marks this as the latest row version. Multiple hash indexes can have pointers to the same rows.

21 Hash Index Delete
T150: DELETE (Susan, Bogota)
[Diagram: rows (50, ∞) Jane, Prague; (100, ∞) John, Prague; (90, 150) Susan, Bogota.]
The transaction first locates <Susan, Bogota> via one of the indexes. To delete the row, the end timestamp on the row is set to 150, with an extra flag bit indicating that the value is a transaction ID. Any other transaction that now attempts to access the row finds that the end timestamp contains a transaction ID (150), which indicates that the row may have been deleted. It then locates that transaction in the transaction map and checks whether it is still active, to determine whether the deletion of <Susan, Bogota> has completed or not.

22 Hash Index Update
T200: UPDATE (John, Prague) to (John, Beijing)
[Diagram: old row (100, 200) John, Prague linked to new row (200, ∞) John, Beijing; f(Beijing) and f(Bogota) share a bucket.]
Transaction T200 first locates <John, Prague> via one of the indexes on Name. An update is performed as a delete followed by an insert. First, the row with timestamp 100 is marked deleted by setting its end timestamp to 200. Next, a new row is created with begin timestamp 200, carrying the value John with City updated to Beijing, and it is linked to the old row. Since the updated City changed from Prague to Beijing, the hash index on City has to be updated too. In this example Beijing and Bogota hash to the same bucket in the hash table, so via that bucket the new T200 row is linked into the same chain as the (90, 150) Susan, Bogota row.

23 Garbage Collection
T250: Garbage collection
[Diagram: the stale row (100, 200) John, Prague is unlinked; live rows (50, ∞) Jane, Prague and (200, ∞) John, Beijing remain.]
DELETE and UPDATE operations generate row versions that eventually become stale, meaning they are no longer visible to any transaction. Stale versions slow down scans of index structures and tie up memory that needs to be reclaimed. The garbage collection process is designed to clean up these stale rows. To determine which rows can be safely deleted, the system keeps track of the timestamp of the oldest active transaction running in the system and uses this value to determine which rows are still potentially needed. Any rows that are not valid as of this point in time (that is, their end timestamp is earlier than this time) are considered stale; stale rows can be removed and their memory released back to the system. It is the responsibility of the garbage collector to reclaim the memory taken by stale row versions. Memory from stale rows is reclaimed when a client transaction commits: following the transaction commit, and before releasing the CPU, a block of stale rows in the queue is released, freeing that memory for reuse. As you use a memory-optimized table in production, it is recommended that you monitor its memory usage so you can make adjustments and avoid OOM (Out of Memory) situations. Garbage collection internals are covered in later lessons.

24 Demo CREATE TABLE DML, Execution Plan
1. [S01 D20B - monitoring Memory-Optimzed memory consumption.sql]
2. [S02 D20 - Accessing MemoryOptimized Objects with Hash Indexes.sql]

25 New in SQL Server 2016: ALTER TABLE

26 Altering Memory-Optimized tables
SQL Server 2016 allows a memory-optimized table to be altered, both schema and indexes, using ALTER TABLE. The syntax allows:
Changing the bucket count
Adding and removing indexes
Changing, adding, and removing columns
Adding and removing constraints: CHECK, DEFAULT, FOREIGN KEY
Triggers
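The operations listed above can be sketched as follows; the table and index names are hypothetical, for illustration only:

```sql
-- Hypothetical names. Change the bucket count of an existing hash index:
ALTER TABLE dbo.Customer
    ALTER INDEX IX_Customer_Hash REBUILD WITH (BUCKET_COUNT = 2097152);

-- Add a column and a range index:
ALTER TABLE dbo.Customer ADD Region nvarchar(50) NULL;
ALTER TABLE dbo.Customer ADD INDEX IX_Region NONCLUSTERED (Region);

-- Remove the index again:
ALTER TABLE dbo.Customer DROP INDEX IX_Region;
```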

27 What are the possibilities?
ALTER TABLE allows:
Adding and removing indexes
Changing, adding, and removing columns
Adding and removing constraints: CHECK, DEFAULT, FOREIGN KEY

28 Demo ALTER TABLE If table was dropped, then recreate it using [S02 D20 - Accessing MemoryOptimized Objects with Hash Indexes.sql] Alter table product: [S02 D50 - ALTER Memory-Optimzed table.sql] Alter table Workload [S02 D52 - ALTER Memory-Optimzed table with dependencies.sql]

29 Transaction Processing
Transactions with Memory-Optimized Tables

30 Concurrency Control: Multi-Version, Optimistic
Multi-version: multi-version data store; snapshot-based transaction isolation; no TempDB usage.
No locks, no latches, minimal context switches; no blocking.
Optimistic: conflict detection ensures isolation; no deadlocks.
Unlike disk-based tables, memory-optimized tables allow optimistic concurrency control even with the higher isolation levels, REPEATABLE READ and SERIALIZABLE. Locks are not taken to enforce the isolation levels. Instead, validation at the end of the transaction ensures the repeatable-read or serializability assumptions; if the assumptions are violated, the transaction is terminated. The important transaction semantics for memory-optimized tables are: multi-versioning, snapshot-based transaction isolation, optimistic concurrency, and conflict detection.
© 2014 Microsoft Corporation Microsoft Confidential
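The snapshot-based isolation described above can be requested in two ways; this is a sketch, and the table and column names are hypothetical:

```sql
-- Let interop T-SQL under READ COMMITTED be elevated to SNAPSHOT automatically:
ALTER DATABASE CURRENT SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

-- Or request snapshot isolation per statement with a table hint:
SELECT c.Name, c.CustomerSince
FROM dbo.Customer AS c WITH (SNAPSHOT)
WHERE c.Name = N'John';
```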

31 Demo Multi-Version Rows
To execute the DEMO use files in 2 vertical tabs: [S04 D20 A - Multiversion Rows - Blocking a row.SQL] and [S04 D20 B - Multiversion Rows - is the row blocked.sql]

32 Compiled Objects

33 Compilation
What types of objects can be natively compiled?
Tables
Types
Stored procedures / triggers / user-defined functions
Objects are compiled into DLLs when they are created. After a server restart, the DLLs are recompiled: tables are compiled during database recovery, and stored procedures at the time of first execution, to speed up database recovery.

34 Native Compilation
Native compilation allows faster data access and more efficient query execution than interpreted Transact-SQL (also called interop). The process converts T-SQL programming constructs into native code, which consists of processor instructions that require no further compilation or interpretation. These objects are automatically recompiled at SQL Server restart, on a database failover, or when the database is taken offline and back online.

35 Natively Compiled SP: Sample code
CREATE PROCEDURE usp_Product_Workload
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    DECLARE @i int = 0        -- variable name lost in the transcript; @i is a placeholder
    WHILE @i <                -- loop bound lost in the transcript
    BEGIN
        INSERT dbo.Product_Workload
            ( [Name], [ProductNumber], [MakeFlag], [FinishedGoodsFlag],
              [Weight], [ProductLine], [Class], [Style] )
        SELECT

36 InMemory DLLs
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL'

37 DLLs

38 Unsupported in Natively Compiled SPs / Functions
TechReady13 9/10/2018
Features: inline table variables; cursors; multi-row INSERT..VALUES; CTEs; COMPUTE; SELECT INTO; CASE; INSERT EXECUTE; READ UNCOMMITTED; READ COMMITTED
Operators: GOTO; OFFSET; INTERSECT; EXCEPT; APPLY; PIVOT / UNPIVOT
Reference: Transact-SQL Constructs Not Supported by In-Memory OLTP
© 2011 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

39 Supported Natively Compiled SPs / Functions – Query Surface Area
SELECT clause: columns and name aliases; scalar subqueries; TOP; SELECT DISTINCT; UNION and UNION ALL; OUTER JOIN; variable assignments
FROM clause: LEFT OUTER JOIN, RIGHT OUTER JOIN, INNER JOIN; subqueries
WHERE clause: IS [NOT] NULL; AND, OR, NOT, IN, EXISTS, BETWEEN
INSERT/UPDATE/DELETE: can now include the OUTPUT clause
GROUP BY: AVG, COUNT, COUNT_BIG, MIN, MAX, SUM
ORDER BY clause: supported with GROUP BY
HAVING clause: same conditions as the WHERE clause
Reference: Supported Features for Natively Compiled T-SQL Modules

40 Security
SQL Server prevents tampering with the generated DLLs in three ways:
When a table or stored procedure is compiled to a DLL, the DLL is immediately loaded into memory and linked to the sqlserver.exe process; a DLL cannot be modified while it is linked to a process.
When a database is restarted, all tables and stored procedures are recompiled based on the database metadata.
The generated files are considered part of user data and have the same security restrictions, via ACLs, as database files: only the SQL Server service account and system administrators can access these files.

41 Natively Compiled SPs – limitations on SQL 2016
For complete list of limitations check:

42 Demo Chatty VS Chunky communication
Run [S05 D021- Executing Natively Compiled SP - Chatty VS Chunky.sql]

43 Cross Feature Support

44 Cross Feature Support
Temporal: support for using temporal system-versioning with In-Memory OLTP. For more information, see System-Versioned Temporal Tables with Memory-Optimized Tables.
Query Store: support for natively compiled code from In-Memory OLTP workloads. For more information, see Using the Query Store with In-Memory OLTP.
Row-Level Security in memory-optimized tables.
MARS: Multiple Active Result Sets (MARS) connections can now access memory-optimized tables and natively compiled stored procedures.
TDE: Transparent Data Encryption support. If a database is configured for ENCRYPTION, files in the Memory-Optimized Filegroup are now also encrypted.
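The temporal integration above can be sketched like this; the table and column names are hypothetical, and note that the history table itself is disk-based:

```sql
-- Hedged sketch of a system-versioned (temporal) memory-optimized table.
CREATE TABLE dbo.Account (
    AccountID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Balance   money NOT NULL,
    SysStart  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEnd    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA,
        SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountHistory));
```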

45 Utilities

46 InMemory Advisor
Transaction Performance Analysis Report
Migration Checklists

47 Transaction Performance Analysis Report
From SSMS Object Explorer, select Reports > Standard Reports > Transaction Performance Analysis Overview
© 2014 Microsoft Corporation. All rights reserved. Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

48 Migration Checklists
Used to identify tables or SPs that are not supported as memory-optimized tables or natively compiled SPs
Using the UI: in Object Explorer, Tasks > Generate In-Memory OLTP Migration Checklists
Using PowerShell

49 References
[PDF] SQL Server 2016 In-Memory OLTP: download.microsoft.com/download/8/3/6/ A-A27C-4684-BC88-FC7B... (page 5, Native compilation: SQL Server 2014 limited the language constructs available in natively compiled procedures to the simplest DML operations needed for basic ...)
In-Memory OLTP (In-Memory Optimization)
In-Memory OLTP Code Samples

