1 Blink and You’ll Miss It: Migrating, Cloning and Recovering Oracle 12c Databases At Warp Speed Jim Czuprynski Zero Defect Computing, Inc. Session #UGF3500

2 My Credentials
■ 30+ years of database-centric IT experience
■ Oracle DBA since 2001
■ Oracle 9i, 10g, 11g OCP and Oracle ACE Director
■ > 100 articles on databasejournal.com and ioug.org
■ Teach core Oracle DBA courses (Grid + RAC, Exadata, Performance Tuning, Data Guard)
■ Regular speaker at Oracle OpenWorld, IOUG COLLABORATE, OUG Norway, and Hotsos
■ Oracle-centric blog (Generally, It Depends)

3 Upgrading to 12c: What’s the Rush?
■ 12c Release 1 … is actually 12c Release 2
■ 12.1.0.2 offers significant enhancements (New!)
▪ PDB enhancements
▪ Big Table and Full Database Caching
▪ In-Memory Aggregation
▪ In-Memory Column Store
■ Support for 11gR2 expires as of 12-2015
▪ 11gR2 database issue-resolution costs will escalate dramatically

4 Our Agenda
■ Refresher on Multi-Tenancy Databases: CDBs, PDBs, and PDB Migration Methods
■ Cloning a New PDB from “Scratch”
■ Cloning New PDBs from Existing PDBs
■ “Replugging” Existing PDBs
■ Migrating a Non-CDB to a PDB
■ RMAN Enhancements
■ Q+A

5 Multi-Tenancy: CDBs and PDBs
Oracle Database 12c offers a completely new multi-tenancy architecture for databases and instances:
■ A Container Database (CDB) comprises one or more Pluggable Databases (PDBs)
■ CDBs are databases that contain common elements shared with PDBs
■ PDBs are comparable to traditional databases in prior releases …
■ … but PDBs offer extreme flexibility for cloning, upgrading, and application workload localization

6 CDBs and Common Objects
CDBs and PDBs share common objects. A CDB owns in common:
■ Control files and SPFILE
■ Online and archived redo logs
■ Backup sets and image copies
Each CDB has one SYSTEM, SYSAUX, UNDO, and TEMP tablespace.
Oracle-supplied data dictionary objects, users, and roles are shared globally between the CDB (CDB$ROOT) and all PDBs.
[Diagram: CDB1 containing PDB1–PDB3, sharing SPFILE, control files, online/archived redo logs, backups, image copies, SYSTEM/SYSAUX/UNDOTBS1/TEMP, and the CDB$ROOT data dictionary, users, and roles]

7 PDBs and Local Objects
PDBs also own local objects:
■ PDBs have a local SYSTEM and SYSAUX tablespace
■ PDBs may have their own local TEMP tablespace
■ PDBs can own one or more application schemas: local tablespaces, local users and roles
■ PDBs own all application objects within their schemas
■ By default, PDBs can only see their own objects
[Diagram: PDB1–PDB3 within CDB1, each with local SYSTEM, SYSAUX, and TEMP tablespaces plus application tablespaces (AP_DATA, HR_DATA, MFG_DATA), users (AP, HR, MFG), and roles (AP_ROLE, HR_ROLE, MFG_ROLE)]

8 Shared Memory and Processes
CDBs and PDBs also share common memory and background processes:
■ All PDBs share the same SGA and PGA
■ All PDBs share the same background processes
▪ OLTP: intense random reads and writes (DBWn and LGWR)
▪ DW/DSS: intense sequential reads and/or logical I/O
▪ Batch and data loading: intense sequential physical reads and physical writes
[Diagram: OLTP, DW + DSS, and batch/data-loading workloads in PDB1–PDB3 sharing CDB1’s SGA & PGA, LGWR, DBWn, and other background processes against system storage]

9 Sharing: It’s a Good Thing!
Sharing common resources - when it makes sense - tends to reduce contention as well as needless resource over-allocation:
■ Not all PDBs demand high CPU cycles
■ Not all PDBs have the same memory demands
■ Not all PDBs have the same I/O bandwidth needs
▪ DSS/DW: MBPS
▪ OLTP: IOPS and latency
Result: more instances with less hardware

10 PDBs: Ultra-Fast Provisioning
Four ways to provision PDBs:
1. Clone from PDB$SEED
2. Clone from an existing PDB
3. “Replug” a previously “unplugged” PDB
4. Plug in a non-CDB as a new PDB
All PDBs already plugged into the CDB stay alive during these operations!
[Diagram: CDB1 with PDB$SEED and PDB1–PDB5; an 11gR2 database being plugged in as a new PDB]

11 Cloning From PDB$SEED

12 Prerequisites to Oracle 12cR1 PDB Cloning
■ A valid Container Database (CDB) must already exist
■ The CDB must permit pluggable databases
■ Sufficient space for the new PDB’s database files must exist
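A quick sanity check against the first two prerequisites - a minimal sketch using the standard 12c dictionary views V$DATABASE and V$PDBS:

```sql
-- Confirm the instance is a container database
SELECT name, cdb FROM v$database;   -- CDB = 'YES' for a container database

-- List the containers already plugged in
-- (CDB$ROOT is CON_ID 1; PDB$SEED, the clone source, is CON_ID 2)
SELECT con_id, name, open_mode FROM v$pdbs ORDER BY con_id;
```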

13 Defining PDB Database File Destinations
■ Declare the new PDB’s destination directory:

CREATE PLUGGABLE DATABASE dev_ap
  ADMIN USER dev_ap_adm IDENTIFIED BY "P@5$w0rD"
  ROLES=(CONNECT)
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/pdbseed',
                       '/u01/app/oracle/oradata/dev_ap');

Parameter               Used During
FILE_NAME_CONVERT       Inline during cloning
DB_CREATE_FILE_DEST     In parameter file during cloning and database creation
PDB_FILE_NAME_CONVERT   In parameter file during cloning of PDBs only

14 Cloning From the PDB Seed Database
A new PDB is cloned in a matter of seconds:

1. CREATE PLUGGABLE DATABASE dev_ap
     ADMIN USER dev_ap_adm IDENTIFIED BY "P@5$w0rD"
     ROLES=(CONNECT);

From CDB1 instance’s alert log:

CREATE PLUGGABLE DATABASE dev_ap
  ADMIN USER dev_ap_admin IDENTIFIED BY * ROLES=(CONNECT)
Tue Apr 08 16:50:18 2014
****************************************************************
Pluggable Database DEV_AP with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#45 to file$(old file#5)
Adding new file#46 to file$(old file#7)
Successfully created internal service dev_ap at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
****************************************************************
Post plug operations are now complete.
Pluggable database DEV_AP with pdb id - 4 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE dev_ap
  ADMIN USER dev_ap_admin IDENTIFIED BY * ROLES=(CONNECT)

15 Completing PDB Cloning Operations
Once the PDB$SEED tablespaces are cloned, the new PDB must be opened in READ WRITE mode:

2. SQL> ALTER PLUGGABLE DATABASE dev_ap OPEN;

From CDB1 instance’s alert log:

alter pluggable database dev_ap open
Tue Apr 08 16:51:39 2014
Pluggable database DEV_AP dictionary check beginning
Pluggable Database DEV_AP Dictionary check complete
Due to limited space in shared pool (need 6094848 bytes, have 3981120 bytes),
limiting Resource Manager entities from 2048 to 32
Opening pdb DEV_AP (4) with no Resource Manager plan active
Tue Apr 08 16:51:56 2014
XDB installed. XDB initialized.
Pluggable database DEV_AP opened read write
Completed: alter pluggable database dev_ap open

16 Cloning from Existing PDBs

17 Cloning a New PDB From Another PDB

1. Declare the new PDB’s destination directory:

*.DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata/qa_ap'
… or …
*.PDB_FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/prod_ap',
                           '/u01/app/oracle/oradata/qa_ap')

2. Connect to CDB$ROOT and quiesce the source PDB in READ ONLY mode:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER PLUGGABLE DATABASE prod_ap CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE prod_ap OPEN READ ONLY;

3. Clone the target PDB:

SQL> CREATE PLUGGABLE DATABASE qa_ap FROM prod_ap;

18 Cloning a New PDB From Another PDB (continued)
From CDB1 instance’s alert log:

Mon Mar 31 08:02:52 2014
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Pluggable database PROD_AP closed
Completed: ALTER PLUGGABLE DATABASE prod_ap CLOSE IMMEDIATE
ALTER PLUGGABLE DATABASE prod_ap OPEN READ ONLY
Mon Mar 31 08:03:03 2014
Due to limited space in shared pool (need 6094848 bytes, have 3981120 bytes),
limiting Resource Manager entities from 2048 to 32
Opening pdb PROD_AP (3) with no Resource Manager plan active
Pluggable database PROD_AP opened read only
Completed: ALTER PLUGGABLE DATABASE prod_ap OPEN READ ONLY
CREATE PLUGGABLE DATABASE qa_ap FROM prod_ap
Mon Mar 31 08:06:16 2014
****************************************************************
Pluggable Database QA_AP with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#8 from file$
Deleting old file#9 from file$
Deleting old file#10 from file$
...
Deleting old file#21 from file$
Deleting old file#22 from file$
Adding new file#23 to file$(old file#8)
Adding new file#24 to file$(old file#9)
...
Adding new file#28 to file$(old file#14)
Marking tablespace #7 invalid since it is not present in the describe file
Marking tablespace #8 invalid since it is not present in the describe file
...
Marking tablespace #14 invalid since it is not present in the describe file
Successfully created internal service qa_ap at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
****************************************************************
Post plug operations are now complete.
Pluggable database QA_AP with pdb id - 4 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE qa_ap FROM prod_ap

19 Completing PDB Cloning Operations
Once the QA_AP database has been cloned, it must be opened in READ WRITE mode:

4. SQL> ALTER PLUGGABLE DATABASE qa_ap OPEN;

From CDB1 instance’s alert log:

alter pluggable database qa_ap open
Mon Mar 31 08:11:47 2014
Pluggable database QA_AP dictionary check beginning
Pluggable Database QA_AP Dictionary check complete
Due to limited space in shared pool (need 6094848 bytes, have 3981120 bytes),
limiting Resource Manager entities from 2048 to 32
Opening pdb QA_AP (4) with no Resource Manager plan active
Mon Mar 31 08:11:59 2014
XDB installed. XDB initialized.
Pluggable database QA_AP opened read write
Completed: alter pluggable database qa_ap open

20 “Replugging” an Existing PDB

21 “Unplugging” An Existing PDB
1. Connect to CDB$ROOT on CDB1, then shut down the source PDB:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER PLUGGABLE DATABASE qa_ap CLOSE IMMEDIATE;

2. “Unplug” the existing PDB from its current CDB:

SQL> ALTER PLUGGABLE DATABASE qa_ap UNPLUG INTO '/home/oracle/qa_ap.xml';

3. Drop the unplugged PDB from its current CDB, retaining its datafiles for replugging:

SQL> DROP PLUGGABLE DATABASE qa_ap KEEP DATAFILES;

22 “Replugging” An Existing PDB
“Replug” the existing PDB into its new CDB (CDB2):

1. Connect to CDB$ROOT at CDB2:

SQL> CONNECT / AS SYSDBA;

2. Check the PDB’s compatibility with CDB2:

SET SERVEROUTPUT ON
DECLARE
  compat BOOLEAN;
BEGIN
  compat := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
              pdb_descr_file => '/home/oracle/qa_ap.xml',
              pdb_name       => 'qa_ap');
  DBMS_OUTPUT.PUT_LINE('PDB compatible? ' ||
    CASE WHEN compat THEN 'YES' ELSE 'NO' END);
END;
/

3. Plug the PDB into CDB2:

SQL> CREATE PLUGGABLE DATABASE qa_ap
       USING '/home/oracle/qa_ap.xml' NOCOPY;

4. Open the replugged PDB in READ WRITE mode:

SQL> ALTER PLUGGABLE DATABASE qa_ap OPEN READ WRITE;

23 Upgrating To 12cR1: Plugging In Non-CDB As PDB

24 Upgrating* a Non-CDB To a PDB
A pre-12cR1 database can be upgrated* to a 12cR1 PDB:
■ Either …
▪ Upgrade the source database to a 12cR1 non-CDB
▪ Plug the upgraded non-CDB into an existing CDB as a new PDB
■ … or:
▪ Clone a new empty PDB into an existing CDB from PDB$SEED
▪ Migrate data from the source database to the newly-cloned PDB

*WARNING: As a member of POEM, I am qualified to make up words. For your own safety, please do not try this without a certified POEM member present; poor grammar and misspelling may result.
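The first path above - plugging an upgraded non-CDB in as a PDB - can be sketched as follows. This is a minimal outline, not the presenter’s exact steps; the PDB name and manifest path are illustrative, while DBMS_PDB.DESCRIBE and the noncdb_to_pdb.sql conversion script are standard 12c components:

```sql
-- In the upgraded 12cR1 non-CDB: quiesce it and generate an XML manifest
SHUTDOWN IMMEDIATE
STARTUP OPEN READ ONLY;
BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/home/oracle/noncdb.xml');
END;
/
SHUTDOWN IMMEDIATE

-- In the target CDB: plug the non-CDB in as a new PDB
CREATE PLUGGABLE DATABASE prod_ap
  USING '/home/oracle/noncdb.xml' NOCOPY;

-- Convert the non-CDB data dictionary to a PDB dictionary, then open it
ALTER SESSION SET CONTAINER = prod_ap;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE prod_ap OPEN;
```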

25 Migrating Data From Previous Releases
Depending on application downtime requirements:
■ Oracle GoldenGate
▪ Low downtime
▪ Separate (perhaps expensive) licensing required
■ Cross-Platform Transportable Tablespaces
▪ 12cR1 integrates TTS operations with DataPump metadata migration automatically
■ DataPump Full Transportable Export (FTE)
▪ 12cR1 integrates DataPump full export metadata migration with TTS automatically

26 Cross-Platform Transport (CPT)
Source: Ora12101 (+DATA, +FRA) - ARCHIVELOG ON, COMPATIBLE >= 12.0, OPEN READ WRITE
Target: Ora12102 (+DATA, +FRA) - ARCHIVELOG ON, COMPATIBLE = 12.0, OPEN READ WRITE

1. Back up the READ ONLY tablespaces as a backup set from 12.1.0.1:

RMAN> BACKUP FOR TRANSPORT FORMAT '+DATA'
        DATAPUMP FORMAT '/home/oracle/dp_fte.dmp'
        TABLESPACE ap_data, ap_idx;

2. Copy the datafile backup sets to the 12.1.0.2 database (e.g. via DBMS_FILE_TRANSFER).

3. Restore the tablespaces into the 12.1.0.2 non-CDB or PDB:

RMAN> RESTORE FOREIGN TABLESPACE ap_data, ap_idx
        FORMAT '+DATA'
        FROM BACKUPSET '+FRA'
        DUMP FILE FROM BACKUPSET '/home/oracle/dp_fte.dmp';

27 Full Transportable Export/Import (FTE)
Source: Ora11203 (+DATA, +FRA) - ARCHIVELOG ON, COMPATIBLE >= 11.2.0.3, OPEN READ WRITE
Target: Ora12010 (+DATA, +FRA) - ARCHIVELOG ON, COMPATIBLE = 12.0, OPEN READ WRITE

1. Export the contents of the entire 11.2.0.3 database:

$> expdp system/****** parfile=fte_11203.dpctl

fte_11203.dpctl:
DUMPFILE=fte_ora11g.dmp
LOGFILE=fte_ora11g.log
TRANSPORTABLE=ALWAYS
VERSION=12.0
FULL=Y

2. Copy the datafiles and metadata dump set to the 12.1.0.1 database (e.g. via DBMS_FILE_TRANSFER).

3. Plug the non-SYSTEM datafiles and all objects into the 12.1.0.1 database:

$> impdp system/****** parfile=fti_11203.dpctl

fti_11203.dpctl:
DIRECTORY=DPDIR
DUMPFILE=fte_ora11g.dmp
LOGFILE=fti_prod_api.log
FULL=Y
TRANSPORT_DATAFILES= …

28 New PDB Features in Release 12.1.0.2

29 PDBs: Accessibility & Management (New!)
■ CONTAINERS Clause
▪ Queries can be executed in a CDB across identically-named objects in different PDBs
■ OMF File Placement
▪ CREATE_FILE_DEST controls the default location of all new files in a PDB (really useful for shared storage!)
■ Improved State Management on CDB Restart
▪ SAVE STATE: PDB automatically reopened
▪ DISCARD STATE: PDB left in default state (MOUNT)
■ LOGGING Clause
▪ Controls whether any future tablespaces are created in LOGGING or NOLOGGING mode
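The CONTAINERS clause and SAVE STATE can be sketched briefly; the table and PDB names below are illustrative, borrowed from this deck’s running examples:

```sql
-- CONTAINERS clause: from CDB$ROOT, aggregate rows across every open PDB
-- that contains an identically-named AP.VENDORS table
SELECT con_id, COUNT(*)
  FROM CONTAINERS(ap.vendors)
 GROUP BY con_id;

-- SAVE STATE: have QA_AP reopen automatically after the next CDB restart
ALTER PLUGGABLE DATABASE qa_ap OPEN;
ALTER PLUGGABLE DATABASE qa_ap SAVE STATE;
```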

30 PDBs: Cloning Enhancements (New!)
■ Subset Cloning
▪ USER_TABLESPACES clause captures only the desired tablespaces during PDB cloning from a non-CDB or PDB
■ Metadata-Only Cloning
▪ NO DATA clause captures only the data dictionary definitions - but not the application data
■ Remote Cloning
▪ Allows cloning of a PDB or non-CDB on a remote server via a database link
■ Cloning from 3rd-Party Snapshots
▪ SNAPSHOT COPY clause enables cloning a PDB directly from snapshot copies stored on supported file systems (e.g. ACFS or Direct NFS)
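Minimal sketches of the four clauses above - the new PDB names, tablespace names, and the cdb1_link database link are illustrative, not from the deck:

```sql
-- Subset clone: bring over only the AP tablespaces
CREATE PLUGGABLE DATABASE qa_ap_subset FROM prod_ap
  USER_TABLESPACES=('AP_DATA','AP_IDX');

-- Metadata-only clone: dictionary definitions, no application rows
CREATE PLUGGABLE DATABASE qa_ap_empty FROM prod_ap NO DATA;

-- Remote clone over a database link pointing at the source CDB
CREATE PLUGGABLE DATABASE qa_ap_remote FROM prod_ap@cdb1_link;

-- Snapshot clone on a supported file system (e.g. ACFS or Direct NFS)
CREATE PLUGGABLE DATABASE qa_ap_snap FROM prod_ap SNAPSHOT COPY;
```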

31 Uber-Fast Database Recovery: 12cR1 Recovery Manager (RMAN) Enhancements

32 Backup, Restore, and Recover Non-CDBs, CDBs, and PDBs
■ Image copy backups now support multi-section, multi-channel BACKUP operations
▪ SECTION SIZE directive fully supported
▪ Faster image copy file creation
■ What is backed up in multiple sections …
■ … can be restored in multi-channel fashion more quickly
■ Backups for TTS can now be taken with the tablespace set open in READ WRITE mode
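The last bullet - taking a transportable backup without forcing the tablespaces READ ONLY - is driven by RMAN’s ALLOW INCONSISTENT clause. A sketch, assuming the same tablespace set as the deck’s CPT example; note that Oracle still requires a final backup with the tablespaces READ ONLY before the actual transport:

```sql
RMAN> BACKUP FOR TRANSPORT ALLOW INCONSISTENT
        FORMAT '+DATA'
        TABLESPACE ap_data, ap_idx;
```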

33 BACKUP AS COPY … SECTION SIZE

RMAN> # Back up just one tablespace set with SECTION SIZE
BACKUP AS COPY SECTION SIZE 100M TABLESPACE ap_data, ap_idx;

Starting backup at 2014-04-07 13:11:33
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting datafile copy
input datafile file number=00014 name=+DATA/NCDB121/DATAFILE/ap_data.289.836526779
backing up blocks 1 through 12800
channel ORA_DISK_2: starting datafile copy
input datafile file number=00015 name=+DATA/NCDB121/DATAFILE/ap_idx.290.836526787
backing up blocks 1 through 12800
channel ORA_DISK_3: starting datafile copy
input datafile file number=00014 name=+DATA/NCDB121/DATAFILE/ap_data.289.836526779
backing up blocks 12801 through 25600
channel ORA_DISK_4: starting datafile copy
input datafile file number=00014 name=+DATA/NCDB121/DATAFILE/ap_data.289.836526779
backing up blocks 25601 through 38400
output file name=+FRA/NCDB121/DATAFILE/ap_data.270.844261895 tag=TAG20140407T131133
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:04
channel ORA_DISK_1: starting datafile copy
input datafile file number=00015 name=+DATA/NCDB121/DATAFILE/ap_idx.290.836526787
backing up blocks 12801 through 25600
output file name=+FRA/NCDB121/DATAFILE/ap_idx.298.844261895 tag=TAG20140407T131133
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:05
channel ORA_DISK_2: starting datafile copy
input datafile file number=00015 name=+DATA/NCDB121/DATAFILE/ap_idx.290.836526787
backing up blocks 25601 through 38400
output file name=+FRA/NCDB121/DATAFILE/ap_data.270.844261895 tag=TAG20140407T131133
channel ORA_DISK_3: datafile copy complete, elapsed time: 00:00:05
output file name=+FRA/NCDB121/DATAFILE/ap_data.270.844261895 tag=TAG20140407T131133
channel ORA_DISK_4: datafile copy complete, elapsed time: 00:00:06
output file name=+FRA/NCDB121/DATAFILE/ap_idx.298.844261895 tag=TAG20140407T131133
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:05
output file name=+FRA/NCDB121/DATAFILE/ap_idx.298.844261895 tag=TAG20140407T131133
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:04
Finished backup at 2014-04-07 13:11:44   << or about 11 seconds total time!

34 RMAN: Table-Level Recovery
An Oracle DBA decides to roll back table AP.VENDORS to a prior point in time (e.g. 24 hours ago), but:
■ FLASHBACK VERSIONS Query, FLASHBACK Query, or FLASHBACK TABLE can’t rewind far enough because UNDOTBS is exhausted
■ FLASHBACK … TO BEFORE DROP is impossible
■ FLASHBACK DATABASE is impractical

RMAN> RECOVER TABLE ap.vendors
        UNTIL TIME 'SYSDATE - 1/24'
        AUXILIARY DESTINATION '+AUX';

What happens (ARCHIVELOG ON, COMPATIBLE = 12.0, database OPEN READ WRITE):
1. Appropriate RMAN backup files are located on +FRA
2. RMAN creates an auxiliary destination on +AUX
3. The tablespace(s) for AP.VENDORS are restored and recovered to the prior point in time (TSPITR)
4. DataPump exports the table (AP.VENDORS) into a dump set in +AUX
5. DataPump imports the recovered data back into +DATA from +AUX

35 Customizing Table-Level Recovery
Table-level recovery is customizable:
■ NOTABLEIMPORT tells RMAN to stop before recovered objects are imported into the target database
■ REMAP TABLE renames recovered tables and table partitions during import
■ REMAP TABLESPACE permits remapping of table partitions into different tablespaces
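A sketch combining the REMAP clauses with the deck’s earlier RECOVER TABLE example; the recovered-table and target-tablespace names are illustrative:

```sql
RMAN> RECOVER TABLE ap.vendors
        UNTIL TIME 'SYSDATE - 1/24'
        AUXILIARY DESTINATION '+AUX'
        REMAP TABLE ap.vendors:vendors_recovered
        REMAP TABLESPACE ap_data:ap_recovery;
```

With REMAP in place the original AP.VENDORS is left untouched, so the recovered rows can be compared against production before any rows are merged back.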

36 In-Memory Column Store: A Revolution in SQL Query Performance

37 In-Memory Column Store (New!)
■ A new way to store data in addition to the database buffer cache
▪ Can be enabled for specific tablespaces, tables, and materialized views
■ Significant performance improvement for queries that:
▪ Filter a large number of rows (e.g. =, <, >, IN)
▪ Select a small number of columns from a table with many columns
▪ Join a small table to a larger table
▪ Perform aggregation (SUM, MAX, MIN, COUNT)
■ Eliminates need for multi-column indexes

38 In-Memory Column Store: Setup

1. Allocate memory for the In-Memory Column Store, then bounce the database instance:

SQL> ALTER SYSTEM SET inmemory_size = 128M SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

2. Add the desired table to the In-Memory Column Store:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER TABLE ap.randomized_sorted INMEMORY
       MEMCOMPRESS FOR QUERY HIGH
       PRIORITY HIGH;
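Before timing queries it is worth confirming the segment actually populated - a minimal sketch using the standard V$IM_SEGMENTS view (the table name matches the deck’s example):

```sql
-- Verify the segment has been populated into the In-Memory Column Store
SELECT segment_name, populate_status,
       inmemory_size, bytes_not_populated
  FROM v$im_segments
 WHERE segment_name = 'RANDOMIZED_SORTED';
```

A POPULATE_STATUS of COMPLETED with BYTES_NOT_POPULATED = 0 means the whole table is in the column store.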

39 In-Memory Column Store: Results

With the In-Memory Column Store disabled:

SQL> ALTER SYSTEM SET inmemory_query = DISABLE;
System altered.

SQL> EXPLAIN PLAN FOR
  2  SELECT key_sts, COUNT(*)
  3    FROM ap.randomized_sorted
  4   WHERE key_sts IN (10,20,30)
  5   GROUP BY key_sts
  6  ;

Plan hash value: 1010500208
----------------------------------------------------------------------------------------
| Id | Operation          | Name              | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |                   |    3 |     9 |  7420   (1)| 00:00:01 |
|  1 |  HASH GROUP BY     |                   |    3 |     9 |  7420   (1)| 00:00:01 |
|* 2 |  TABLE ACCESS FULL | RANDOMIZED_SORTED | 300K |  879K |  7413   (1)| 00:00:01 |
----------------------------------------------------------------------------------------
   2 - filter("KEY_STS"=10 OR "KEY_STS"=20 OR "KEY_STS"=30)

SQL> SELECT key_sts, COUNT(*)
  2    FROM ap.randomized_sorted
  3   WHERE key_sts IN (10,20,30)
  4   GROUP BY key_sts
  5  ;

   KEY_STS   COUNT(*)
---------- ----------
        30     149260
        20     100778
        10      50151

Elapsed: 00:00:00.89

With the In-Memory Column Store enabled:

SQL> ALTER SYSTEM SET inmemory_query = ENABLE;
System altered.

SQL> EXPLAIN PLAN FOR
  2  SELECT key_sts, COUNT(*)
  3    FROM ap.randomized_sorted
  4   WHERE key_sts IN (10,20,30)
  5   GROUP BY key_sts
  6  ;

Plan hash value: 1010500208
-------------------------------------------------------------------------------------------------
| Id | Operation                   | Name              | Rows | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                   |    3 |     9 |   319  (14)| 00:00:01 |
|  1 |  HASH GROUP BY              |                   |    3 |     9 |   319  (14)| 00:00:01 |
|* 2 |  TABLE ACCESS INMEMORY FULL | RANDOMIZED_SORTED | 300K |  879K |   312  (12)| 00:00:01 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("KEY_STS"=10 OR "KEY_STS"=20 OR "KEY_STS"=30)
       filter("KEY_STS"=10 OR "KEY_STS"=20 OR "KEY_STS"=30)

SQL> SELECT key_sts, COUNT(*)
  2    FROM ap.randomized_sorted
  3   WHERE key_sts IN (10,20,30)
  4   GROUP BY key_sts
  5  ;

   KEY_STS   COUNT(*)
---------- ----------
        30     149260
        20     100778
        10      50151

Elapsed: 00:00:00.07   … a 12.7X improvement!

40 Over To You …

41 Please feel free to evaluate this session:
Session #UGF3500 - Blink and You’ll Miss It: Migrating, Cloning and Recovering Oracle 12c Databases At Warp Speed
If you have any questions or comments, feel free to:
■ E-mail me at jczuprynski@zerodefectcomputing.com
■ Follow my blog (Generally, It Depends): http://jimczuprynski.wordpress.com
■ Connect with me on LinkedIn (Jim Czuprynski)
■ Follow me on Twitter (@jczuprynski)
Thank You For Your Kind Attention

42 Coming Soon …
Coming in May 2015 from Oracle Press: Oracle Database Upgrade, Migration & Transformation Tips & Techniques
■ Covers everything you need to know to upgrade, migrate, and transform any Oracle 10g or 11g database to Oracle 12c
■ Discusses strategy and tactics of planning Oracle migration, transformation, and upgrade projects
■ Explores the latest transformation features: Recovery Manager (RMAN), Oracle GoldenGate, Cross-Platform Transportable Tablespaces, Cross-Platform Transport (CPT), Full Transportable Export (FTE)
■ Includes detailed sample code

43 Visit IOUG at the User Group Pavilion
Stop by the User Group Pavilion in the lobby of Moscone South and catch up with the user community!
■ Connect with IOUG members and volunteers
■ Pick up a discount to join the IOUG community of 20,000+ technologists
■ Enter for the chance to win books from IOUG Press or a free registration to COLLABORATE 15!
Visit us Sunday through Wednesday!

44 IOUG SIG Meetings at OpenWorld
All meetings located in Moscone South, Room 208
Sunday, September 28
■ Cloud Computing SIG: 1:30 p.m. - 2:30 p.m.
Monday, September 29
■ Exadata SIG: 2:00 p.m. - 3:00 p.m.
■ BIWA SIG: 5:00 p.m. - 6:00 p.m.
Tuesday, September 30
■ Internet of Things SIG: 11:00 a.m. - 12:00 p.m.
■ Storage SIG: 4:00 p.m. - 5:00 p.m.
■ SPARC/Solaris SIG: 5:00 p.m. - 6:00 p.m.
Wednesday, October 1
■ Oracle Enterprise Manager SIG: 8:00 a.m. - 9:00 a.m.
■ Big Data SIG: 10:30 a.m. - 11:30 a.m.
■ Oracle 12c SIG: 2:00 p.m. - 3:00 p.m.
■ Oracle Spatial and Graph SIG: 4:00 p.m. (*OTN lounge)

45 COLLABORATE 15 - IOUG Forum
April 12-16, 2015 - Mandalay Bay Resort and Casino, Las Vegas, NV
The IOUG Forum Advantage:
■ Save more than $1,000 on education offerings like pre-conference workshops
■ Access the brand-new, specialized IOUG Strategic Leadership Program
■ Priority access to the hands-on labs with Oracle ACE support
■ Advance access to supplemental session material and presentations
■ Special IOUG activities with no "ante in" needed - evening networking opportunities and more
COLLABORATE 15 Call for Speakers ends October 10
www.collaborate.ioug.org
Follow us on Twitter at @IOUG or via the conference hashtag #C15LV!

46 JOIN or RENEW
Did you know that IOUG Members get up to 60% off of IOUG Press eBooks?
■ By Murali Vallath - Releasing: Sept. 30
■ Oracle Enterprise Manager 12c Command-Line Interface, by Kellyn Pot'vin, Seth Miller, Ray Smith - Releasing: Oct. 15
■ Expert Oracle Database Architecture (3rd Edition), by Thomas Kyte, Darl Kuhn - Releasing: Oct. 22

47 Twitter: @IOUG, or follow hashtag #IOUG
Facebook: IOUG’s official Facebook Fan Page: www.ioug.org/facebook
LinkedIn: Connect and network with other Oracle professionals and experts in the IOUG Community LinkedIn group: www.ioug.org/linkedin

