Enhancing Productivity with MySQL 5.6 New Features



Presentation on theme: "Enhancing Productivity with MySQL 5.6 New Features"— Presentation transcript:


2 Enhancing Productivity with MySQL 5.6 New Features
Arnaud ADANT MySQL Principal Support Engineer

3 Safe Harbour Statement
THE FOLLOWING IS INTENDED TO OUTLINE OUR GENERAL PRODUCT DIRECTION. IT IS INTENDED FOR INFORMATION PURPOSES ONLY, AND MAY NOT BE INCORPORATED INTO ANY CONTRACT. IT IS NOT A COMMITMENT TO DELIVER ANY MATERIAL, CODE, OR FUNCTIONALITY, AND SHOULD NOT BE RELIED UPON IN MAKING PURCHASING DECISIONS. THE DEVELOPMENT, RELEASE, AND TIMING OF ANY FEATURES OR FUNCTIONALITY DESCRIBED FOR ORACLE’S PRODUCTS REMAINS AT THE SOLE DISCRETION OF ORACLE.

4 Program Agenda (part 1)
Introduction
InnoDB Memcached Plugin
InnoDB Full Text Indexes
Major Optimizer Enhancements
Enhanced Explain and Tracing

5 Program Agenda (part 2)
Persistent statistics
Managing tablespaces and partitions
Online Schema changes
Performance_schema in practice
Q & A

6 Introduction : Who I am
Arnaud ADANT (http://blog.aadant.com)
10+ years in development
3 years in MySQL Support
MySQL performance
I love my job !

7 Introduction : About this tutorial
Productivity oriented
Examples are worth a thousand words
Web app demo using PHP
The source code will be available on my blog : blog.aadant.com

8 InnoDB Memcached plugin

9 InnoDB Memcached Plugin
About memcached
What is it ?
Benefits / limitations
How to configure ?
How to use ?
Demo

10 About memcached
Popular caching framework
In-memory caching
Open source
Widely used (Wikipedia, Flickr, Twitter, YouTube, …)
Created by Brad Fitzpatrick for LiveJournal; the libmemcached client library is by Brian Aker, former Director of Architecture at MySQL AB

11 Memcached principle
value = get(key)
if value is undefined:
    load the data
    set(key, value)
return the value

Example from memcached.org (cache results):
function get_foo(foo_id)
    foo = memcached_get("foo:" . foo_id)
    return foo if defined foo
    foo = fetch_foo_from_database(foo_id)
    memcached_set("foo:" . foo_id, foo)
    return foo
end

12 InnoDB Memcached plugin : what is it ?
A new way to connect to MySQL : NoSQL !
In 5.6 GA, production ready
Implements the memcached protocol (complies with memcapable)
Plugin architecture on top of the InnoDB engine
MySQL becomes a key-value store, a memcached server
See the InnoDB memcached plugin manual; see also MySQL Cluster memcached

13 InnoDB Memcached plugin : architecture
(Diagram) SQL clients connect on port 3306 and go through the listener, parser and optimizer; memcached clients connect on port 11211 to the memcached listener and its cache(s), which has direct access to InnoDB.

14 Memcached to InnoDB translation

memcached command | InnoDB DML implementation | SQL equivalent
get (k) | a read/fetch command | SELECT value FROM t WHERE key = k
set (k,v) | a search followed by an insertion or update (depending on whether or not the key exists) | INSERT INTO t(key, value) VALUES (k, v) ON DUPLICATE KEY UPDATE value = v
add (k,v) | a search followed by an insertion or update | INSERT INTO t(key, value) VALUES (k, v)
flush_all | truncate table | TRUNCATE TABLE t

15 Memcached to InnoDB translation

Operation | InnoDB DML implementation
get | a read/fetch command
set | a search followed by an insertion or update (depending on whether or not a key exists)
add | a search followed by an insertion or update
replace | a search followed by an update
append | a search followed by an update (appends data to the result before update)
prepend | a search followed by an update (prepends data to the result before update)
incr / decr | a search followed by an update
delete | a search followed by a deletion
flush_all | truncate table

Table taken from the manual.

16 InnoDB Memcached plugin : namespaces

memcached command | InnoDB behavior
get ("@@container.key") | read the value of key in the specified container
get ("@@container") | select the container as default container for the transaction; no value returned
get ("key") | read the value of key in the default container; if no default container was defined, the "default" container is used, otherwise the first container

17 InnoDB Memcached plugin : introduction
Single-threaded performance decreases with each release : more features, more complex code, more overhead, metadata locking from 5.5
The memcached plugin by-passes the SQL layer (optimizer, handler interface, …)
Result : better multi-threaded performance

18 Single threaded insert performance (SQL)
500k inserts, autocommit = 1, innodb_flush_log_at_trx_commit = 2

Benchmark details :
drop database if exists test_case;
create database test_case;
use test_case;
set global query_cache_size = 0;
set global innodb_flush_log_at_trx_commit = 2;
drop table if exists t1;
create table t1(
  id int primary key auto_increment,
  k varchar(50),
  value varchar(1000),
  unique key(k)) engine=InnoDB;
set @start = (select now());  -- the variable names were stripped in the transcript
-- list4.csv contains 500k random inserts
source /home/aadant/list4.csv;
set @end = (select now());
select timediff(@end, @start) as duration;
exit

Example content of list4.csv (auto_increment, uuid()) :
insert into t1(k, value) values('128','35e411aa-1cab-11e3-bba c6');
insert into t1(k, value) values('1641','35e cab-11e3-bba c6');
insert into t1(k, value) values('1836','35e cab-11e3-bba c6');

19 Single threaded insert performance (memcached)
Perl, 100k inserts, commit every 20k
3x faster with Cache::Memcached::Fast
Tested on 5.6.14 and 5.7.2 with :
plugin-load=libmemcached.so
daemon_memcached_w_batch_size=20000
See the bug report MEMCACHED ADD/SET ARE SLOWER THAN INSERTS, which has since been fixed.

20 PHP code using memcached (NoSQL)
PECL memcache

<?php
$cache = new Memcache;
$cache->connect('localhost', 11211);
$cache->flush(0);
$handle = fopen($datafile, 'r'); // $datafile: the input file path was stripped from the transcript
if ($handle) {
    while (($buffer = fgets($handle, 4096)) !== false) {
        list($key, $value) = split("\t", $buffer);
        $cache->add($key, $value);
    }
    fclose($handle);
}
?>

Default values, except daemon_memcached_w_batch_size set to 20000

21 PHP code using mysqli (SQL)

<?php
$mysqli = new mysqli("localhost", "root", "", "test_case", 0, "/tmp/mysql.sock");
$mysqli->query("TRUNCATE TABLE t1");
$mysqli->query("set autocommit = 0");
$handle = fopen($datafile, "r"); // $datafile: the input file path was stripped from the transcript
if ($handle) {
    $i = 0;
    while (($buffer = fgets($handle, 4096)) !== false) {
        list($key, $value) = split("\t", $buffer);
        $mysqli->query("insert into t1(k,value) values('" . $key . "','" . $value . "')");
        $i = $i + 1;
        if ($i % 20000 == 0) { // the batch size was stripped; 20000 matches the earlier slides
            $mysqli->commit();
        }
    }
    fclose($handle);
}
$mysqli->close();
?>

22 InnoDB Memcached plugin : benefits
More productivity for developers :
simple API (get, set, delete, flush_all, …)
new community connectors to MySQL (C, C++, Python, PHP, Perl, Java, …)
multi-column support : k1|k2|ki / v1|v2|vi
compatible with MySQL replication
high performance

23 InnoDB Memcached plugin : benefits
Combined with other 5.6 features :
flexible persistent key-value store
fast warm-up
Even better with :
innodb_buffer_pool_load_at_startup
innodb_buffer_pool_dump_at_shutdown
innodb_buffer_pool_dump_now
innodb_buffer_pool_load_now

24 InnoDB Memcached API : limitations
Does not cover all use cases yet !
The key : non-NULL VARCHAR only
The value : CHAR, VARCHAR or BLOB
Windows is not supported : the memcached daemon plugin is only supported on Linux, Solaris and OS X
Less secure than SQL, same security as memcached
Feature request for non-string data types : Bug SUPPORT FOR NON STRING DATA TYPES AS MEMCACHED KEYS AND COLUMNS

25 InnoDB Memcached : how to install ?
Bundled from 5.6
2 dynamic libraries in lib/plugin :
libmemcached.so : the plugin / listener
innodb_memcached.so : the cache
Available in community and enterprise builds
tar.gz packages have no dependency; RPMs need libevent-dev

26 InnoDB Memcached : how to configure ? (1)
Can be configured dynamically
Cleaning :
drop database if exists innodb_memcache;
drop table test.demo_test;
An initialization script is required (creates the innodb_memcache db) :
source share/innodb_memcached_config.sql;
Uninstall (in case of re-configuration) :
uninstall plugin daemon_memcached;

27 InnoDB Memcached : how to configure ? (2)
Design the container table :
use test_case;
drop table if exists t1;
CREATE TABLE t1 (
  k varchar(50),
  value varchar(1000),
  c3 int(11),
  c4 bigint(20) unsigned,
  c5 int(11),
  unique KEY k (k)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
The 3 memcached protocol columns (flags, cas, exp) are currently required : here c3, c4 and c5.

28 InnoDB Memcached : how to configure ? (3)
Container configuration :
USE innodb_memcache;
TRUNCATE TABLE containers;
INSERT INTO containers SET
  name = 'test',                  -- namespace
  db_schema = 'test_case',
  db_table = 't1',
  key_columns = 'k',              -- key
  value_columns = 'value',        -- value
  unique_idx_name_on_key = 'k',   -- uk/pk
  flags = 'c3',                   -- memcached columns
  cas_column = 'c4',
  expire_time_column = 'c5';
key_columns and value_columns can be composite : k1|k2|k3, v1|v2|v3

29 Innodb_memcache.containers
Table structure

30 InnoDB Memcached : how to configure ? (4)
Restart the memcached plugin
Configuration file :
plugin-load=libmemcached.so
Dynamic install :
uninstall plugin daemon_memcached;
install plugin daemon_memcached soname "libmemcached.so";

31 InnoDB Memcached : advanced options (1)

Configuration file option | Default value | Description
daemon_memcached_r_batch_size | 1 | commit after N reads
daemon_memcached_w_batch_size | 1 | commit after N writes
innodb_api_trx_level | 0 = READ UNCOMMITTED | transaction isolation : 0 = READ UNCOMMITTED, 1 = READ COMMITTED, 2 = REPEATABLE READ, 3 = SERIALIZABLE
innodb_api_enable_mdl | 0 = off | MDL locking
innodb_api_disable_rowlock | 0 = off | disables row-level locking when ON
innodb_api_enable_binlog | 0 = off | replication (ROW format)
daemon_memcached_option | (empty) | options passed to the memcached daemon

For performance, increase daemon_memcached_w_batch_size (for example to 20000), as in the sketch below.
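A hedged my.cnf sketch combining the options above (the batch-size values are the ones suggested on these slides; innodb_api_enable_binlog is only needed if the memcached writes must replicate):

[mysqld]
plugin-load = libmemcached.so
daemon_memcached_w_batch_size = 20000   # commit every 20000 writes
daemon_memcached_r_batch_size = 20000   # commit every 20000 reads
innodb_api_enable_binlog = 1            # replicate memcached writes (ROW format)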

32 InnoDB Memcached : advanced options (2)
Security, config options, cache policies
SASL security : daemon_memcached_option=-S
Multi-column separator : |
Namespace delimiter : . (as in get @@container.key)
Cache policies :
innodb_only : memcached expiry off
cache : traditional memcached
caching : memcached expiry on

33 InnoDB Memcached : how to use it?
You can use it with any memcached client :
telnet (port 11211)
libmemcached executables (memcat, memcp, memrm, …)
PHP : PECL memcache / memcached
Perl : Cache::Memcached, Cache::Memcached::Fast
Java, C / C++, Python, Ruby, .NET, …
But also SQL from MySQL !

34 InnoDB Memcached : demo (0)
Default blog page = 7k

35 InnoDB Memcached : demo (1)
Adding a persistent memcached to WordPress
Apache 2.x + PHP PECL memcached
MySQL advanced or community
WordPress wordpress-3.4-RC2.zip
WordPress Memcached plugin : memcached.2.0.2
On OEL6 :
yum install php
yum install php-pecl-memcache

36 InnoDB Memcached : demo (2)
Configuring MySQL :
drop database if exists innodb_memcache;
drop table if exists test.demo_test;
source share/innodb_memcached_config.sql;
drop database if exists test_case;
create database test_case;
use test_case;
drop table if exists t1;
CREATE TABLE `t1` (
  `k` varchar(50) PRIMARY KEY,           -- primary key
  `value` BLOB DEFAULT NULL,             -- BLOB storage
  `c3` int(11) DEFAULT NULL,
  `c4` bigint(20) unsigned DEFAULT NULL,
  `c5` int(11) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
use innodb_memcache;
truncate table containers;
INSERT INTO containers VALUES ("test", "test_case", "t1", "k", "value", "c3", "c4", "c5", "primary");

37 InnoDB Memcached : demo (3)
Configuring MySQL (install the plugin, do not activate) :
plugin-load=libmemcached.so
daemon_memcached_w_batch_size=20000
daemon_memcached_r_batch_size=20000
Configuring the WordPress memcached plugin :
cd /var/www/html/wordpress/wp-content
cp plugins/memcached/object-cache.php .
For more configuration, see plugins/memcached/readme.txt

38 InnoDB Memcached : demo (4)
How to test ?
/etc/init.d/httpd restart
Using Apache Benchmark :
ab -n … -c 10 <url>
Compare global handlers and temporary tables

39 InnoDB Memcached : demo (5)
Results
memcached is not faster than plain SQL on the test machine :
memcached : Requests per second : … [#/sec] (mean)
SQL : Requests per second : … [#/sec] (mean)
However, the number of handlers and temporary tables on disk decreased.
We will use the performance_schema to troubleshoot this (see "Performance_schema in practice").
(The full ab -n 1000 -c 5 outputs of both runs, ApacheBench 2.3 against Apache/2.2.15, lost their figures in the transcript and are omitted here; the memcached run reported 624 length failures, the SQL run 859.)

40 InnoDB Memcached : demo (6)
Results (chart) : ab -n … -c 10

41 InnoDB Full Text Indexes

42 InnoDB Full-Text Indexes
Introduction
Benefits / limitations
Indexing / searching
Sorting
Advanced configuration
Demo

43 Introduction
Full-text indexes are useful for text processing
Before 5.6, only supported on MyISAM
FTS optimizes textual search : LIKE '%search%' requires a full table scan, very ineffective on big tables (see the sketch below)
In 5.6, InnoDB full-text indexes :
same features as MyISAM
drop-in replacement
more scalable
Fewer reasons to keep MyISAM
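As a minimal sketch of the difference, using the articles table defined later in this section:

-- full table scan; a leading-wildcard LIKE cannot use an index:
SELECT * FROM articles WHERE body LIKE '%database%';
-- uses the InnoDB full-text index on (title, body):
SELECT * FROM articles WHERE MATCH (title, body) AGAINST ('database');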

44 InnoDB FTS : implementation (from J. Yang)
Inverted indexes
Example document : "mysql database and fulltext search"

Word | DOC ID | Pos
and | 2 | 16
database | … | 7
fulltext | … | 20
mysql | … | …
search | … | 28

Incoming text strings are tokenized into individual words
Words are stored in one or more auxiliary tables
For each word, a list of Document ID / word position pairs is stored
More information here : InnoDB Full Text Index [CON3494]

45 InnoDB FTS : implementation (from J. Yang)
Auxiliary tables
There are two types of auxiliary tables :
Table-specific auxiliary tables, managing table-wide FTS settings :
  the config table
  the deleted table
Index-specific auxiliary tables :
  the actual index tables; multiple index tables per index (partitioned) : index a, index b, …
More information here : InnoDB Full Text Index [CON3494]

46 InnoDB FTS : benefits
Supersedes the MyISAM implementation
MyISAM implementation replacement (not drop-in)
Full transactional support
InnoDB scalability on cores
Parallel indexing
Better performance than MyISAM
Additional features over MyISAM (proximity search)
Easier to troubleshoot indexing (auxiliary tables)
Note : the sorting algorithm is not identical between MyISAM and InnoDB.

47 InnoDB FTS : limitations
Basically the same as the MyISAM implementation
No custom tokenization (language dependent)
No index-based custom sort
Not supported for partitioned tables
Can slow down the transaction commit (changes in auxiliary tables); note that the indexing is not done at commit time
Increased memory usage
Same character set required for all indexed columns

48 InnoDB FTS : indexing (from J. Yang)
2 ways to do it :
1. CREATE TABLE tbl_name (col_name column_definition, FULLTEXT [INDEX|KEY] [index_name] (index_col_name,...));
2. CREATE FULLTEXT INDEX index_name ON tbl_name (index_col_name,...);
   or ALTER TABLE tbl_name ADD FULLTEXT INDEX index_name (index_col_name,...);

49 InnoDB FTS : searching
Natural language search
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('run mysqld as root' IN NATURAL LANGUAGE MODE);
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('dbms stand for' IN NATURAL LANGUAGE MODE WITH QUERY EXPANSION);

Test setup :
drop database if exists test_fts;
create database test_fts;
use test_fts;
CREATE TABLE articles (
  id INT UNSIGNED AUTO_INCREMENT NOT NULL PRIMARY KEY,
  title VARCHAR(200),
  body TEXT,
  FULLTEXT (title,body)
) ENGINE=InnoDB;
INSERT INTO articles (title,body) VALUES
  ('MySQL Tutorial','DBMS stands for DataBase ...'),
  ('How To Use MySQL Well','After you went through a ...'),
  ('Optimizing MySQL','In this tutorial we will show ...'),
  ('1001 MySQL Tricks','1. Never run mysqld as root '),
  ('MySQL vs. YourSQL','In the following database comparison ...'),
  ('MySQL Security','When configured properly, MySQL ...');
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

Results :
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('run mysqld as root' IN NATURAL LANGUAGE MODE);
| id | title             | body                        |
|  4 | 1001 MySQL Tricks | 1. Never run mysqld as root |
1 row in set (0.00 sec)

SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('dbms stand for' IN NATURAL LANGUAGE MODE WITH QUERY EXPANSION);
| id | title                 | body                                      |
|  1 | MySQL Tutorial        | DBMS stands for DataBase ...              |
|  6 | MySQL Security        | When configured properly, MySQL ...       |
|  5 | MySQL vs. YourSQL     | In the following database comparison ...  |
|  3 | Optimizing MySQL      | In this tutorial we will show ...         |
|  4 | 1001 MySQL Tricks     | 1. Never run mysqld as root               |
|  2 | How To Use MySQL Well | After you went through a ...              |
6 rows in set (0.01 sec)

50 InnoDB FTS : searching
Boolean search
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('+MySQL -YourSQL' IN BOOLEAN MODE);
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('"following …"' IN BOOLEAN MODE);  -- phrase search; the phrase was truncated in the transcript

51 Boolean operators (from J. Yang)
"+" : a leading plus sign indicates that this word must be present in each row that is returned
"-" : a leading minus sign indicates that this word must not be present in any of the rows that are returned
"> <" : these two operators are used to change a word's contribution to the relevance value that is assigned to a row
"~" : a leading tilde acts as a negation operator, causing the word's contribution to the row's relevance to be negative
"*" : wildcard search operator
(examples below)
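A hedged sketch of these operators in use, reusing the articles table from the searching slides (the word choices are illustrative):

SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('+mysql -yoursql' IN BOOLEAN MODE);   -- must contain mysql, must not contain yoursql
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('+mysql ~tutorial' IN BOOLEAN MODE);  -- tutorial pushes relevance down without excluding rows
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('+mysql +(>tutorial <security)' IN BOOLEAN MODE);  -- > raises, < lowers a word's contribution
SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('data*' IN BOOLEAN MODE);             -- wildcard : matches database, data, ...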

52 InnoDB FTS : sorting
Automatic sorting by descending relevance :
rows are sorted with the highest relevance first (IDF algorithm)
relevance values are nonnegative floating-point numbers; zero relevance means no similarity
relevance is computed from the number of words in the row, the number of unique words in that row, the total number of words in the collection, and the number of documents (rows) that contain a particular word
the > < operators alter relevance : '+apple +(>turnover <strudel)'
~ suppresses the effect on relevance
custom sort via ORDER BY (slower); see the sketch below
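A minimal sketch of an explicit relevance sort on the same articles table (the score alias is my own):

SELECT id, title, MATCH (title,body) AGAINST ('database') AS score
FROM articles
WHERE MATCH (title,body) AGAINST ('database')
ORDER BY score DESC;

The identical MATCH ... AGAINST in the SELECT list and the WHERE clause is evaluated only once.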

53 InnoDB FTS : advanced configuration
Relevant options only :

Configuration file option | Default value | Description
innodb_optimize_fulltext_only | OFF | OPTIMIZE TABLE only optimizes the full-text index
innodb_ft_num_word_optimize | 2000 | number of words per OPTIMIZE TABLE
innodb_ft_min_token_size | 3 (MyISAM : 4) | minimum token size
innodb_ft_aux_table | (empty) | dynamic variable to access the index tables via INFORMATION_SCHEMA (see below)
innodb_ft_sort_pll_degree | 2 | number of threads at index creation
innodb_ft_cache_size | 32M | holds the tokenized documents in memory
innodb_sort_buffer_size | 1M | used at index creation for sorting
innodb_ft_enable_stopword | 1 | whether to use stopwords
innodb_ft_*_stopword_table | (empty) | db_name/table_name of the stopword table

See Bug EFFECTIVE MEMORY USAGE FOR INNODB-SORT-BUFFER-SIZE AND INNODB-FT-SORT-PLL-DEGREE
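A hedged illustration of innodb_ft_aux_table (the table name comes from the demo; the INFORMATION_SCHEMA tables are standard in 5.6):

SET GLOBAL innodb_ft_aux_table = 'test_fts/articles';       -- db_name/table_name
SELECT * FROM INFORMATION_SCHEMA.INNODB_FT_CONFIG;          -- table-wide FTS settings
SELECT word, doc_id, position
  FROM INFORMATION_SCHEMA.INNODB_FT_INDEX_TABLE LIMIT 10;   -- the inverted index itself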

54 InnoDB FTS index : maintenance
Use OPTIMIZE TABLE
Updates and deletes feed the "deleted" auxiliary table
Entries are physically removed by OPTIMIZE
Run OPTIMIZE TABLE after changes, or better :
SET GLOBAL innodb_optimize_fulltext_only = 1;
SET GLOBAL innodb_ft_num_word_optimize = <N>;
OPTIMIZE TABLE <table_name>;

55 Demo (1)
Adding a full-text index search to WordPress
Same installation as previously
WordPress comes with a SQL search, not full text
Create a full-text index on wp_posts :
CREATE FULLTEXT INDEX in_fts ON wp_posts(post_title, post_content);
Edit /var/www/html/wordpress/wp-includes/query.php (see next slide)
Sorting can be configured in the same way

56 Demo (2)
Edit /var/www/html/wordpress/wp-includes/query.php
/*foreach( (array) $q['search_terms'] as $term ) {
    $term = esc_sql( like_escape( $term ) );
    $search .= "{$searchand}(($wpdb->posts.post_title LIKE '{$n}{$term}{$n}') OR ($wpdb->posts.post_content LIKE '{$n}{$term}{$n}'))";
    $searchand = ' AND ';
}*/
$search .= "{$searchand}(( MATCH($wpdb->posts.post_title,$wpdb->posts.post_content) AGAINST('".$q['s']."') ))";

57 Major optimizer enhancements

58 Major Optimizer enhancements
Index Condition Pushdown (ICP)
Multi-Range Read (MRR)
Batched Key Access (BKA)
Subqueries
ORDER BY ... LIMIT optimization
Varchar maximum length
InnoDB extended secondary keys

59 Optimizer switches (1)
5.6 : mysql> select @@optimizer_switch;
index_merge=on, index_merge_union=on, index_merge_sort_union=on, index_merge_intersection=on, engine_condition_pushdown=on, index_condition_pushdown=on (ICP), mrr=on, mrr_cost_based=on (MRR), block_nested_loop=on, batched_key_access=off, materialization=on, semijoin=on, loosescan=on, firstmatch=on, subquery_materialization_cost_based=on, use_index_extensions=on

60 Optimizer switches (2)
block_nested_loop=on, batched_key_access=off : BNL, BKA
materialization=on, semijoin=on, loosescan=on, firstmatch=on, subquery_materialization_cost_based=on : subqueries
use_index_extensions=on : index extension, using the InnoDB PK in secondary indexes

61 Index Condition Pushdown (from O. Grovlen)
Pushes conditions that can be evaluated on the index down to the storage engine
Works only on indexed columns
Goal : evaluate conditions without having to access the actual record
Reduces the number of disk/block accesses
Reduces CPU usage
Both conditions on a single index and conditions on earlier tables in a JOIN can be pushed down
(Diagram : query conditions travel from the MySQL server down to the index inside the storage engine, before the table data is read.)
These slides come from Oystein Grovlen's presentation : How to Analyze and Tune SQL Queries for Better Performance, MySQL tutorial

62 Index Condition Pushdown (from O. Grovlen)
How it works
Without ICP, the storage engine : 1. reads the index; 2. reads the record; 3. returns the record; the server : 4. evaluates the condition
With ICP, the storage engine : 1. reads the index and evaluates the pushed index condition; 2. reads the record; 3. returns the record; the server : 4. evaluates the rest of the condition
This slide comes from Oystein Grovlen's presentation : How to Analyze and Tune SQL Queries for Better Performance, MySQL tutorial

63 Index Condition Pushdown Example
CREATE TABLE person (
  name VARCHAR(..),
  height INTEGER,
  postalcode INTEGER,
  age INTEGER,
  INDEX (postalcode, age)
);
SELECT name FROM person
WHERE postalcode BETWEEN … AND …  -- pushed to the storage engine, evaluated on index entries
  AND age BETWEEN 21 AND 22       -- pushed to the storage engine, evaluated on index entries
  AND height > 180;               -- evaluated by the server

64 Index Condition Pushdown Demo (1)
Index on (postalcode, age)

drop database if exists test_icp;
create database test_icp;
use test_icp;
CREATE TABLE person (
  name VARCHAR(50),
  height INTEGER,
  postalcode INTEGER,
  age INTEGER,
  INDEX (postalcode, age)
) ENGINE=InnoDB;
insert into person(name, height, postalcode, age) values(
  floor(rand()*100000), greatest(floor(rand()*235),100),
  floor(rand()*95000), floor(rand()*125));
-- repeated once more, then multiplied :
replace into person(name, height, postalcode, age)
  select floor(rand()*100000), greatest(floor(rand()*235),100), floor(rand()*95000), floor(rand()*125)
  from person p1, person p2, person p4, person p5, person p6, person p7, person p8, person p9, person p10,
       person p11, person p12, person p13, person p14, person p15, person p16, person p17, person p18;
SELECT count(*) FROM person;                                                            -- 131074
SELECT count(*) FROM person WHERE postalcode BETWEEN … AND 95000;                       -- 6911
SELECT count(*) FROM person WHERE postalcode BETWEEN … AND … AND age BETWEEN 21 AND 22; -- 97
SET SESSION optimizer_switch = 'index_condition_pushdown=on';
FLUSH STATUS;
EXPLAIN SELECT name FROM person
  WHERE postalcode BETWEEN … AND 95000
    AND age BETWEEN 21 AND 22
    AND height > 180;
SHOW SESSION STATUS LIKE 'Handler_read%';
SET SESSION optimizer_switch = 'index_condition_pushdown=off';

65 Index Condition Pushdown Demo (2)
Testing with ICP on (5.6) and off (5.5-like)

66 Index Condition Pushdown Demo (3)
Results for 5.6 Using index condition, 97 rows examined in the index

67 Index Condition Pushdown Demo (4)
Results for ICP = off (5.5-like) : Using where, … rows examined

68 Multi-Range Read
IO-bound loads only; used for MySQL Cluster as well
MySQL Server uses DS-MRR (disk-sweep MRR)
Only useful for IO-bound loads and index ranges
Converts random key accesses into sequential accesses using read_rnd_buffer_size
Heuristic : table size larger than innodb_buffer_pool_size
mysql> set session optimizer_switch = 'mrr=on,mrr_cost_based=off,index_condition_pushdown=off';
Query OK, 0 rows affected (0.00 sec)
mysql> explain SELECT * FROM person WHERE postalcode >= … AND postalcode < … AND age = 36;

69 MySQL 5.6: Data Access with DS-MRR
InnoDB example : an index scan collects PKs in a buffer (in index order); they are sorted (into PK order); the rows are then sweep-read from the table.
With DS-MRR we sort the references found in the index in primary key order before we access the table; hence, the table access is sequential. Note that if the buffer is not large enough to hold all the keys, there will be multiple sequential scans. The buffer size is configurable.

70 Multi-Range Read : forcing MRR
set session optimizer_switch = 'mrr=on,mrr_cost_based=off,index_condition_pushdown=off';
MRR can lead to performance regressions with LIMIT

71 MySQL 5.5 vs MySQL 5.6: DBT-3 queries using DS-MRR
DBT-3, scale 10 (23 GB)
innodb_buffer_pool_size = 1 GB (disk-bound)
read_rnd_buffer_size = 4 MB
Here we see the benefits of DS-MRR on queries from the DBT-3 benchmark, run on a 23 GB database with a 1 GB buffer pool, so performance is disk-bound. There are 5 queries in DBT-3 for which DS-MRR is used, with a 91-98% reduction in query time. read_rnd_buffer_size determines the size of the buffer discussed earlier and can be configured per session.

72 Batched Key Access
IO-bound loads only
Basically MRR for joins with an index
Uses the join_buffer_size
Sorts the rows in the join buffer
MRR interface to the storage engine

73 MySQL 5.6: Batched Key Access (BKA)
DS-MRR applied to join buffering : the join buffer is filled with matching rows from Table1; when it is full, the matching index entries are found and their PKs collected in a buffer (in join-buffer order), sorted (into PK order), and the rows of Table2 are sweep-read.
We recognize the right-hand part of the figure from the earlier DS-MRR slide.

74 MySQL 5.5 vs MySQL 5.6: queries using BKA
DBT-3, scale 10 (23 GB)
innodb_buffer_pool_size = 1 GB (disk-bound)
join_buffer_size = 4 MB
optimizer_switch = 'batched_key_access=on,mrr_cost_based=off'
BKA is not on by default, so we need to set the optimizer_switch for it to be used. Also, the cost-based selection for MRR is currently not working well, so we need to turn that off. For most queries we get a significant improvement with BKA, but there are a few queries where it is not beneficial. For example, it has a really bad impact on Q17 : one of the tables is used twice, both in the main join and in a subquery. Without BKA, the same row is needed at about the same time in both contexts, and the buffering from BKA breaks this locality; hence, the page has to be read from disk again when the row is needed the second time.

75 Batched Key Access : forcing BKA
SET optimizer_switch='mrr=on,mrr_cost_based=off,batched_key_access=on';
SELECT * FROM city c join person p
  WHERE p.postalcode >= … AND p.postalcode < … and c.postalcode = p.postalcode;
Extra: Using join buffer (Batched Key Access)

Setup :
create table city (
  postalcode INTEGER,
  city_name varchar(50),
  primary key(postalcode)
) ENGINE=InnoDB;
insert into city(postalcode, city_name) values(floor(rand()*95000), floor(rand()*95000));
replace into city(postalcode, city_name)
  select floor(rand()*95000), floor(rand()*95000)
  from city c1, city c2, city c3, city c4, city c5, city c6, city c7, city c8, city c9, city c10, city c11;

mysql> explain SELECT * FROM city c join person p WHERE p.postalcode >= … AND p.postalcode < … and c.postalcode = p.postalcode\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: c
         type: range
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 107
        Extra: Using where
*************************** 2. row ***************************
        table: p
         type: ref
possible_keys: postalcode
          key: postalcode
      key_len: 5
          ref: test_icp.c.postalcode
         rows: 1
        Extra: Using join buffer (Batched Key Access)
2 rows in set (0.00 sec)

76 Subqueries
A major improvement !
IN queries were a big issue in MySQL before 5.6, a major cause of query rewrites :
SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2 WHERE where_condition)
From 5.6, the optimizer can decide to use :
a semi-join (duplicate removal) : first match, loose scan, materialization (MatLookup, MatScan), duplicate weedout, table pull-out (join with PK)
materialization : subquery / dependent subquery
the EXISTS strategy
5.6 also optimizes subqueries in the FROM clause (derived tables). Note that optimizing subqueries also has an impact on EXPLAIN plan execution time.

Example : this problem existed in 5.5 and is solved in 5.6 :
DROP DATABASE IF EXISTS tmp_tables;
CREATE DATABASE tmp_tables;
USE tmp_tables;
CREATE TABLE t (
  id bigint(10) AUTO_INCREMENT,
  text_column text,
  varchar_column varchar(16000),
  PRIMARY KEY(id)
) ENGINE = InnoDB CHARSET=latin1;
INSERT INTO t(varchar_column) VALUES (NULL);
INSERT INTO t(varchar_column) VALUES (NULL);
-- 1M rows, 40M table
REPLACE INTO t(varchar_column) SELECT NULL FROM t t1, t t2, t t3, t t4, t t5, t t6, t t7, t t8, t t9, t t10, t t11, t t12, t t13, t t14, t t15, t t16, t t17, t t18, t t19, t t20;
-- fast, does not use a temporary table
SELECT count(*) FROM t;
-- a bit longer but still fast, using a temporary table
SELECT count(*) FROM (SELECT id FROM t) z;
-- 7,9 G on disk, much longer : 7 min
SELECT count(*) FROM (SELECT * FROM t) z;

77 ORDER BY ... LIMIT optimization
5.6 can use an in-memory priority queue :
SELECT col1, ... FROM t1 ... ORDER BY name LIMIT 10;
2 possibilities in 5.6 :
use filesort (IO on disk, CPU)
use a priority queue in the sort_buffer_size that compares each row to the "last" value in the queue and inserts it in sorted position if it belongs in the queue
The optimizer chooses the best option (see the sketch below)
Not used with SQL_CALC_FOUND_ROWS
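A hedged way to see which option was chosen, using the person demo table and the optimizer trace introduced later in this tutorial (the trace key name is taken from 5.6 trace output):

SET optimizer_trace = 'enabled=on';
SELECT name FROM person ORDER BY name LIMIT 10;
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
-- look for "filesort_priority_queue_optimization": { ... "chosen": true }
SET optimizer_trace = 'enabled=off';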

78 Varchar maximum length
On-disk MyISAM temporary tables now use variable-length columns :
CREATE TABLE t1 (
  id int,
  col1 varchar(10),
  col2 varchar(2048)
);
/* insert 4M rows */
SELECT col1 FROM t1 GROUP BY col2, id LIMIT 1;
See also : Bug THE OPTIMIZER STILL USES FIXED LENGTH TEMPORARY TABLES ON DISK

79 InnoDB index extension
The optimizer now considers the PK appended to secondary indexes :
SET optimizer_switch='use_index_extensions=on'; (default)
CREATE TABLE t1 (
  i1 INT NOT NULL DEFAULT 0,
  i2 INT NOT NULL DEFAULT 0,
  d DATE DEFAULT NULL,
  PRIMARY KEY (i1, i2),
  INDEX k_d (d)   -- effectively contains (d, i1, i2)
) ENGINE = InnoDB;
-- insert 25 rows (the values were stripped from the transcript)

mysql> SET optimizer_switch='use_index_extensions=on';
mysql> explain select * from t1 where d = curdate() and i1 = 2;
| id | select_type | table | type | possible_keys | key | key_len | ref         | rows | Extra       |
|  1 | SIMPLE      | t1    | ref  | PRIMARY,k_d   | k_d | …       | const,const |    1 | Using index |

mysql> SET optimizer_switch='use_index_extensions=off';
| id | select_type | table | type | possible_keys | key | key_len | ref   | rows | Extra                    |
|  1 | SIMPLE      | t1    | ref  | PRIMARY,k_d   | k_d | …       | const |    1 | Using where; Using index |

80 Enhanced explain and tracing
This slide comes from Oystein Grovlen’s presentation : How to Analyze and Tune SQL Queries for Better Performance, MySQL tutorial

81 Enhanced explain and tracing
EXPLAIN for UPDATE, INSERT, DELETE statements
EXPLAIN FORMAT=JSON
Optimizer tracing

82 EXPLAIN for update, insert, delete
A long-awaited feature request !
Before MySQL 5.6, EXPLAIN was only available for SELECT : a rewrite was needed, or SHOW STATUS LIKE 'Handler%'
From 5.6, available for : INSERT, UPDATE, DELETE, REPLACE

83 Explain for data modifiers
Example :
EXPLAIN DELETE FROM person WHERE postalcode = '10000'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: person
         type: range
possible_keys: postalcode
          key: postalcode
      key_len: 5
          ref: const
         rows: 1
        Extra: Using where

84 EXPLAIN format = JSON
Provides more information in a developer-friendly format
EXPLAIN has a lot of variations :
EXPLAIN EXTENDED; SHOW WARNINGS\G
EXPLAIN PARTITIONS
EXPLAIN (tabular format)
EXPLAIN format=JSON has it all :
structured format
extensible format
verbose

85 EXPLAIN format = JSON
explain format=JSON delete from person where postalcode = '10000'\G
EXPLAIN: {
  "query_block": {
    "select_id": 1,
    "table": {
      "delete": true,
      "table_name": "person",
      "access_type": "range",
      "possible_keys": [ "postalcode" ],
      "key": "postalcode",
      "used_key_parts": [ "postalcode" ],
      "key_length": "5",
      "ref": [ "const" ],
      "rows": 1,
      "filtered": 100,
      "attached_condition": "(`test_icp`.`person`.`postalcode` = '10000')"
    }
  }
}

86 Visual explain in MySQL WorkBench
The JSON format can be displayed by GUI tools

87 Optimizer traces
New in 5.6, useful for support and troubleshooting
Typical usage :
# Turn tracing on (it's off by default):
SET optimizer_trace="enabled=on", end_markers_in_json=on;
SET optimizer_trace_max_mem_size = …;
SELECT ...; # your query here
SELECT * FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
# possibly more queries...
# When done with tracing, disable it:
SET optimizer_trace="enabled=off";

88 Optimizer traces : what is traced ?
Contains more information on the optimizer decisions : execution plans at runtime !
SELECT, UPDATE, DELETE, REPLACE, CALL (stored procedures), SET using selects
The replication SQL thread is not traceable
About 20% overhead
SELECT TRACE INTO DUMPFILE <filename> FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;

89 Optimizer trace : demo (1)
Creating a stored procedure :
use test_icp;
drop procedure if exists test_proc;
delimiter //
create procedure test_proc()
begin
  set @q = 'select count(*) from person';  -- the variable assignment was stripped in the transcript
  prepare stmt from @q;
  execute stmt;
end //
delimiter ;

90 Optimizer trace : demo (2)
Test script :
SET OPTIMIZER_TRACE="enabled=on", END_MARKERS_IN_JSON=on;
set optimizer_trace_limit = 100;
set optimizer_trace_offset = -100;
SET optimizer_trace_max_mem_size = …;
SELECT * FROM city c join person p WHERE p.postalcode >= … AND p.postalcode < … and c.postalcode = p.postalcode;
call test_proc();
SELECT * FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace="enabled=off";

91 Optimizer trace : demo (3)
Results

92 Optimizer trace : demo (4)
"considered_execution_plans": [
  {
    "plan_prefix": [
    ] /* plan_prefix */,
    "table": "`person`",
    "best_access_path": {
      "considered_access_paths": [
        {
          "access_type": "scan",
          "rows": …,
          "cost": 26680,
          "chosen": true
        }
      ] /* considered_access_paths */
    } /* best_access_path */,
    "cost_for_plan": 26680,
    "rows_for_plan": …,

93 Persistent statistics

94 Persistent statistics
Why ?
What's new in 5.6 ?
Where are the statistics stored ?
Demo

95 Persistent statistics : why ?
A major improvement for plan stability
InnoDB statistics are based on random dives to estimate index cardinalities
Before 5.6, index and table statistics were calculated at runtime and transient
Bad for execution plans :
different EXPLAIN from run to run
different EXPLAIN on the master and slaves

96 Cardinality
"An estimate of the number of unique values in the index. (…) The higher the cardinality, the greater the chance that MySQL uses the index when doing joins." (MySQL manual, SHOW INDEX syntax)

97 Cardinality estimate on a big InnoDB table
(Screenshot : the estimate shows 594 where it should be 100, due to random dives.)

98 Persistent statistics : example
Big tables with close cardinalities : 2 identical tables can produce different values due to random dives :
use test;
set global innodb_stats_persistent = 0;
drop database test_persistent;
create database test_persistent;
use test_persistent;
drop table if exists t1;
CREATE TABLE t1 (
  id int auto_increment NOT NULL,
  c1 int NOT NULL,
  c2 tinyint NOT NULL,
  PRIMARY KEY(id),
  KEY c1 (c1),
  KEY c2 (c2)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
truncate table t1;
insert into t1(c1, c2) values(floor(rand()*100), floor(rand()*100));
replace into t1(c1, c2) select floor(rand()*100), floor(rand()*100)
  from t1 t1, t1 t2, t1 t3, t1 t4, t1 t5, t1 t6, t1 t7, t1 t8, t1 t9, t1 t10,
       t1 t11, t1 t12, t1 t13, t1 t14, t1 t15, t1 t16, t1 t17, t1 t18, t1 t19, t1 t20;
INSERT INTO t1(c1, c2) select 12, 13 from t1 limit …;
select count(*), count(distinct c1), count(distinct c2) from t1;
drop table if exists t2;
create table t2 like t1;
insert into t2 select * from t1 order by id;
analyze table t1;
analyze table t2;
show indexes from t1;
show indexes from t2;

99 Same data, different cardinalities ! (594 vs 18)

100 What's new in 5.6 ? Persistent statistics and more
innodb_stats_persistent = ON by default
innodb_stats_auto_recalc = ON
Statistics are collected by ANALYZE TABLE, or automatically when more than 10% of the rows have changed

101 What's new in 5.6 ? Persistent statistics and more
innodb_stats_on_metadata = OFF by default (dynamic variable)
innodb_stats_method = nulls_equal
innodb_stats_method impacts the cardinality calculation when there are NULL values

102 What's new in 5.6 ? Persistent statistics and more
innodb_stats_persistent_sample_pages = 20
innodb_stats_transient_sample_pages = 8 (replaces innodb_stats_sample_pages, deprecated)
Used by the statistics estimation algorithm during ANALYZE TABLE or auto-recalc

103 What's new in 5.6 ? Persistent statistics options at the table level

CREATE / ALTER table option | Possible values | Description
STATS_PERSISTENT | DEFAULT, 0, 1 | see innodb_stats_persistent
STATS_AUTO_RECALC | DEFAULT, 0, 1 | see innodb_stats_auto_recalc
STATS_SAMPLE_PAGES | e.g. 20 | see innodb_stats_persistent_sample_pages

(see the sketch below)
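For example, a minimal sketch on the t1 demo table (the sample-page value is illustrative):

ALTER TABLE t1 STATS_PERSISTENT=1, STATS_AUTO_RECALC=1, STATS_SAMPLE_PAGES=50;
ANALYZE TABLE t1;   -- recollects the persistent statistics with the new sampling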

104 Where are the statistics stored ?
2 new InnoDB tables in the mysql database

105 Where are the statistics stored ?
2 new InnoDB tables in the mysql database : innodb_table_stats and innodb_index_stats
These tables are replicated by default : this solves the problematic master / slave execution plan differences seen before 5.6
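A quick hedged look at the stored statistics, reusing the test_persistent demo schema:

SELECT * FROM mysql.innodb_table_stats
 WHERE database_name = 'test_persistent' AND table_name = 't1';
SELECT index_name, stat_name, stat_value
  FROM mysql.innodb_index_stats
 WHERE database_name = 'test_persistent' AND table_name = 't1';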

106 Demo
Testing auto-recalc :
use test;
drop database test_persistent;
create database test_persistent;
use test_persistent;
drop table if exists t1;
CREATE TABLE t1 (
  id int auto_increment NOT NULL,
  c1 int NOT NULL,
  c2 tinyint NOT NULL,
  PRIMARY KEY(id),
  KEY c1 (c1),
  KEY c2 (c2)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
truncate table t1;
insert into t1(c1, c2) values(floor(rand()*100), floor(rand()*100));
replace into t1(c1, c2) select floor(rand()*100), floor(rand()*100)
  from t1 t1, t1 t2, t1 t3, t1 t4, t1 t5, t1 t6, t1 t7, t1 t8, t1 t9, t1 t10,
       t1 t11, t1 t12, t1 t13, t1 t14, t1 t15, t1 t16, t1 t17, t1 t18, t1 t19, t1 t20;
INSERT INTO t1(c1, c2) select 12, 13 from t1 limit …;
select count(*), count(distinct c1), count(distinct c2) from t1;
drop table if exists t2;
create table t2 like t1;
insert into t2 select * from t1 order by id;
analyze table t1;
analyze table t2;
show create table t2;
show indexes from t1;
show indexes from t2;
select * from mysql.innodb_index_stats where database_name='test_persistent';

On the slave :
change master to master_host='…', master_port=5614, master_log_file='pan14-bin…', master_log_pos=120, master_user='root', master_password='';
start slave;
show slave status\G
select * from mysql.innodb_index_stats where database_name='test_persistent' and table_name='t2';

-- testing auto-recalc
delete from t2 limit …;

107 Demo
Testing replication (effect of ANALYZE), binlog_format = MIXED
-- master
analyze table t1;
select * from mysql.innodb_index_stats where database_name='test_persistent' and table_name='t1' and index_name='c2';
-- slave
select * from mysql.innodb_index_stats where database_name='test_persistent' and table_name='t1' and index_name='c2';

108 Demo
Testing replication (table creation), binlog_format = MIXED
innodb_index_stats contains several rows per index; the column stat_name identifies the type of statistics, as described by the content of the stat_description column. The n_diff_pfx rows contain the cardinality statistics :
for rows with stat_name = 'n_diff_pfx01', stat_value contains the number of distinct values for the first column of the given index
for rows with stat_name = 'n_diff_pfx02', stat_value contains the number of unique combinations of the two first columns of the given index, and so on
(Note that InnoDB indexes, in addition to the columns that were specified when the index was created, contain all columns of the primary key.)
-- master
select * from mysql.innodb_index_stats where database_name='test_persistent' and table_name='t1' and index_name='c1';
-- slave
select * from mysql.innodb_index_stats where database_name='test_persistent' and table_name='t1' and index_name='c1';

109 Transportable tablespaces and partitions

110 Managing tablespaces and partitions
Transportable tablespaces
Managing partitions
Tablespace location
Separate undo logs
Demo

111 Transportable tablespaces
How is an InnoDB table stored ? 3 parts :
the FRM file
the IBD file containing the data (several IBD files if partitioned)
an entry in the internal InnoDB data dictionary, located in the main InnoDB tablespace (ibdata1)

112 Transportable tablespaces
Why copy tablespaces ?
Prerequisite : innodb_file_per_table = 1 (.ibd file); see the check below
Faster copy than mysqldump
Data recovery
Backup (MEB can back up a single table)
Building a test environment
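A minimal pre-flight check, assuming otherwise default settings:

SHOW VARIABLES LIKE 'innodb_file_per_table';
SET GLOBAL innodb_file_per_table = 1;  -- affects tables created from now on; existing tables keep their current storage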

113 Transportable tablespaces
Copying datafiles (.ibd) from one server to another
Before 5.6, it was possible to restore a data file only on the same MySQL server, and it was harder if the InnoDB dictionary part was missing (missing table ID)
From 5.6, it is possible provided that :
you use the same MySQL series (5.6 => 5.6), preferably the SAME version
It is only safe with the additional metadata available at export (the .cfg file)
Mixing series can cause serious crashes : see Bug CRASH WHEN IMPORTING A 5.5 TABLESPACE TO 5.6 and Bug ALTER TABLE SHOULD NOT ALLOW CREATION OF TABLES WITH BOTH 5.5 AND 5.6 TEMPORALS

114 Transportable tablespaces
How it works

On the source :
create database test_transportable;
use test_transportable;
CREATE TABLE t(c1 INT) engine=InnoDB;
insert into t(c1) values(1),(2),(3);
FLUSH TABLES t FOR EXPORT;   -- writes t.cfg next to t.frm and t.ibd in the database directory
-- copy the files while the table is locked :
scp /innodb_data_dir/test/t.{ibd,cfg} destination-server:/innodb_data_dir/test_transportable
UNLOCK TABLES;

On the destination :
CREATE TABLE t(c1 INT) engine=InnoDB;
ALTER TABLE t DISCARD TABLESPACE;
-- copy t.ibd and t.cfg into the database directory, then :
mysql> ALTER TABLE t IMPORT TABLESPACE;
Query OK, 0 rows affected (0.31 sec)
mysql> select * from t;
| c1 |
|  1 |
|  2 |
|  3 |
3 rows in set (0.00 sec)

115 Managing partitions
Selecting and exchanging partitions
SELECT * FROM employees PARTITION (p0, p2) WHERE lname LIKE 'S%';
ALTER TABLE e EXCHANGE PARTITION p0 WITH TABLE e2;
e2's data is checked against the p0 partition definition; the p0 partition of e and the table e2 swap contents

116 Tablespace location
More flexibility
Before 5.6, one datadir for all ibd files; symlinks were the only solution, problematic on ALTER TABLE and not supported for InnoDB
From 5.6 :
CREATE TABLE external (x int UNSIGNED NOT NULL PRIMARY KEY) DATA DIRECTORY = '/volumes/external1/data';

117 Separate undo logs
Smaller ibdata1 !
Undo logs store uncommitted data during a transaction
Their size is not limited
Stored in ibdata1 by default
2 variables (a sketch of the configuration follows) :
innodb_undo_tablespaces : the number of undo tablespaces [0 - 126]
innodb_undo_directory : the undo log directory (better on fast storage)
They cannot shrink for now and cannot be dropped
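A hedged my.cnf sketch (the values and the path are illustrative; in 5.6 these variables are read at startup and the undo tablespaces must be created when the instance is initialized):

[mysqld]
innodb_undo_tablespaces = 4        # number of undo tablespaces, 0 - 126
innodb_undo_directory = /ssd/undo  # hypothetical fast-storage path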

118 Online Schema changes

119 Online Schema changes (Online DDL)
Introduction Demo

120 Online DDL : introduction
A most wanted 5.6 feature !
DDL = Data Definition Language, mostly ALTER TABLE
Before 5.6, ALTER TABLE was blocking (exclusive lock)
Online DDL is crucial for availability and developer agility
Trigger-based scripted solutions existed before 5.6 : pt-online-schema-change, Facebook Online Schema Change
MySQL 5.6 introduces Online DDL for most ALTER operations, not all
For completeness : secondary index creation and drop did not require a full table rebuild before 5.6 either (only rebuilding the indexes), but the operation was blocking.

121 Online DDL : ALTER options
2 main parameters were added to ALTER TABLE (see the sketch below) :
ALGORITHM [=] {DEFAULT|INPLACE|COPY} ; DEFAULT = defined by the operation, either INPLACE or COPY
LOCK [=] {DEFAULT|NONE|SHARED|EXCLUSIVE}
NONE : concurrent DML allowed
SHARED : concurrent queries allowed
EXCLUSIVE : all statements wait
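For example, hedged sketches on the person demo table (the index and column changes are illustrative):

ALTER TABLE person ADD INDEX idx_height (height), ALGORITHM=INPLACE, LOCK=NONE;  -- online; concurrent DML allowed
ALTER TABLE person MODIFY name VARCHAR(100), ALGORITHM=COPY, LOCK=SHARED;        -- a type change needs a copy in 5.6; concurrent reads only

If the requested ALGORITHM or LOCK cannot be honored for the operation, the ALTER fails with an error instead of silently falling back.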

122 Online DDL : definitions
Some definitions are needed before we go on
DML = Data Manipulation Language : INSERT, UPDATE, DELETE, REPLACE
SELECT is not part of DML (in the standard) !
Concurrent queries = SELECTs

123 Types of Online Operations
Credits : Calvin Sun
Metadata only :
MySQL Server metadata, such as altering a column default
MySQL Server metadata & InnoDB metadata, such as add/drop foreign key
Metadata plus changes without rebuilding the table, such as add/drop index
Metadata plus rebuilding the table, such as add primary key, add column
It is best to define the primary key when you create a table, so you need not rebuild the table later.

124 How Does It Work - Online Add Index
Credits : Calvin Sun
CREATE INDEX index_name ON table_name (c1);
Pre-prepare phase : check whether the online DDL is supported (upgradable shared metadata lock)
Prepare phase (exclusive metadata lock, no concurrent DML allowed) : create temp table for the new index (if primary); create log files, logging starts
Build phase (upgradable shared metadata lock; concurrent SELECT, DELETE, INSERT, UPDATE) : scan the clustered index; extract index entries; sort / merge index build; DML logging
Final phase : apply the logs at the end of create index; update system tables (metadata); drop the old table (if primary)
You can create multiple indexes on a table with one ALTER TABLE statement. This is relatively efficient, because the clustered index of the table needs to be scanned only once (although the data is sorted separately for each new index). When you create a UNIQUE or PRIMARY KEY index, InnoDB must do some extra work : for UNIQUE indexes, InnoDB checks that the table contains no duplicate values for the key; for a PRIMARY KEY index, InnoDB also checks that none of the PRIMARY KEY columns contains a NULL. It is best to define the primary key when you create a table, so you need not rebuild the table later.

125 Considerations for Online Operations (1)
Credits : Calvin Sun
Plan disk space requirements for online operations :
temp tables (if a rebuild is needed)
DML log files
redo/undo logs
Parameters (see the sketch below) :
innodb_sort_buffer_size
innodb_online_alter_log_max_size
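A hedged sketch (the size is an example value; innodb_online_alter_log_max_size is dynamic, while innodb_sort_buffer_size can only be set at server startup):

SET GLOBAL innodb_online_alter_log_max_size = 512 * 1024 * 1024;  -- room for the DML logged during the build

If the DML log exceeds this limit during an online ALTER, the operation fails and is rolled back.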

126 Considerations for Online Operations (2)
Credits : Calvin Sun
Performance considerations :
online operations may be slower than offline operations
understand the impact on DML operations
Other considerations :
foreign keys : online add foreign key constraint is only supported when check_foreigns is turned off
DROP PRIMARY KEY is only allowed in combination with ADD PRIMARY KEY

127 Example 1: Add / Drop Index
Credits : Calvin Sun
mysql> set old_alter_table=0;
Query OK, 0 rows affected (0.00 sec)
mysql> create index i_dtyp_big on big_table (data_type);
Query OK, 0 rows affected (37.93 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> drop index i_dtyp_big on big_table;
Query OK, 0 rows affected (0.16 sec)
mysql> set old_alter_table=1;
Query OK, 0 rows affected (0.00 sec)
mysql> create index i_dtyp_big on big_table (data_type);
Query OK, … rows affected (4 min … sec)
Records: … Duplicates: 0 Warnings: 0
mysql> drop index i_dtyp_big on big_table;
Query OK, … rows affected (3 min … sec)
Not just faster; DML is also allowed during the build
The table has 21 columns, 1.7M rows, > 300 MB

128 Example 2: Rename Column
Credits : Calvin Sun
mysql> set old_alter_table=0;
Query OK, 0 rows affected (0.00 sec)
mysql> alter table big_table change `flags` `new_flags`
    -> varchar(3) character set utf8 not null;
Query OK, 0 rows affected (0.08 sec)
Records: 0 Duplicates: 0 Warnings: 0
mysql> set old_alter_table=1;
mysql> alter table big_table change `new_flags` `flags`
    -> varchar(3) character set utf8 not null;
Query OK, … rows affected (3 min … sec)
Records: … Duplicates: 0 Warnings: 0
Type info is required (the column definition must be repeated even for a rename)

129 Performance_schema in practice

130 Performance_schema in practice
ps_helper
How to use ?
Demo
unzip dbahelper-master.zip
mysql -uroot -p --socket=/tmp/mysql.sock < ps_helper_56.sql
update performance_schema.setup_instruments set enabled = 'YES';
update performance_schema.setup_consumers set enabled = 'YES';

131 ps_helper
Getting started with the performance_schema
Developed by Mark Leith of MySQL / Oracle : worked in MySQL Support, now Senior Software Manager in the MEM team
A collection of views and stored procedures
Installed in a separate schema (ps_helper)
The views are self-explanatory
Related talks : Performance Schema and ps_helper [CON4077]; Making the Performance Schema Easier to Use [CON5282]; Improving Performance with MySQL Performance Schema [HOL9733]

132 Installing ps_helper
ps_helper is a fantastic troubleshooting tool
unzip dbahelper-master.zip
mysql -uroot -p --socket=/tmp/mysql.sock < ps_helper_56.sql
update performance_schema.setup_instruments set enabled = 'YES';
update performance_schema.setup_consumers set enabled = 'YES';
(Note that the 2 UPDATE lines enable everything : good for testing, not optimal for production.)

133 How to use ?
The performance_schema has a small overhead (this overhead is being optimized); you can use it to collect data on your load
Just use the ps_helper views !
You can collect data like this :
1. Call ps_helper.truncate_all(1) or restart the server
2. Enable the required instruments and consumers
3. Run the load
4. Dump the ps_helper views to a file :
set names utf8;
tee PS_helper_data_pan13_noSQL2.txt;
use ps_helper;
select * from check_lost_instrumentation;
select * from innodb_buffer_stats_by_schema;
select * from innodb_buffer_stats_by_schema_raw;
select * from innodb_buffer_stats_by_table;
select * from innodb_buffer_stats_by_table_raw;
select * from io_by_thread_by_latency;
select * from io_by_thread_by_latency_raw;
select * from io_global_by_file_by_bytes;
select * from io_global_by_file_by_bytes_raw;
select * from io_global_by_file_by_latency;
select * from io_global_by_file_by_latency_raw;
select * from io_global_by_wait_by_bytes;
select * from io_global_by_wait_by_bytes_raw;
select * from io_global_by_wait_by_latency;
select * from io_global_by_wait_by_latency_raw;
select * from latest_file_io;
select * from latest_file_io_raw;
select * from processlist;
select * from processlist_raw;
select * from schema_index_statistics;
select * from schema_index_statistics_raw;
select * from schema_object_overview;
select * from schema_table_statistics;
select * from schema_table_statistics_raw;
select * from schema_table_statistics_with_buffer;
select * from schema_table_statistics_with_buffer_raw;
select * from schema_tables_with_full_table_scans;
select * from schema_unused_indexes;
select * from statement_analysis;
select * from statement_analysis_raw;
select * from statements_with_errors_or_warnings;
select * from statements_with_full_table_scans;
select * from statements_with_runtimes_in_95th_percentile;
select * from statements_with_runtimes_in_95th_percentile_raw;
select * from statements_with_sorting;
select * from statements_with_temp_tables;
select * from user_summary;
select * from user_summary_by_stages;
select * from user_summary_by_stages_raw;
select * from user_summary_by_statement_type;
select * from user_summary_by_statement_type_raw;
select * from user_summary_raw;
select * from version;
select * from wait_classes_global_by_avg_latency;
select * from wait_classes_global_by_avg_latency_raw;
select * from wait_classes_global_by_latency;
select * from wait_classes_global_by_latency_raw;
select * from waits_by_user_by_latency;
select * from waits_by_user_by_latency_raw;
select * from waits_global_by_latency;
select * from waits_global_by_latency_raw;
select EVENT_NAME as nm, COUNT_STAR as cnt, SUM_TIMER_WAIT as tm
  from performance_schema.events_waits_summary_global_by_event_name
  where COUNT_STAR > 0 order by 3 desc limit 20;
SHOW GLOBAL VARIABLES;
SHOW GLOBAL STATUS;
SHOW ENGINE INNODB STATUS\G
SHOW FULL PROCESSLIST;
SHOW ENGINE PERFORMANCE_SCHEMA STATUS\G
notee;

134 Demo (1)
Troubleshooting WordPress performance
WordPress uses a lot of queries
The code is very difficult to understand
Plugins can run expensive queries
The search is very inefficient on large databases
It has scalability issues with a lot of posts

135 Analysing ps_helper results : SQL

136 Analysing ps_helper results : noSQL

137 Demo Using MEM 3.0 GA SQL NoSQL

138 Demo Using MEM 3.0 GA SQL NoSQL

139 Demo Using MEM 3.0 GA SQL NoSQL

140 Demo : query analysis Troubleshooting : the application has a problem !

141 Sources of Database Performance Problems
SQL
Indexes
Schema changes
Data growth
Hardware

142 MySQL Enterprise Monitor 3.0 is GA !
SLA monitoring
Real-time performance monitoring
Alerts & notifications
MySQL best practice advisors
"The MySQL Enterprise Monitor is an absolute must for any DBA who takes his work seriously." - Adrian Baumann, System Specialist, Federal Office of Information Technology & Telecommunications

143 Questions & Answers

144 Credits
Wikimedia for images
Big thanks to the reviewers of the tutorial : Shane Bester



