2 Data Pump in Oracle Database 10g: Foundation for Ultra-High Speed Data Movement Utilities
Session id: 40043
George H. Claborn, Data Pump Technical Project Leader
Oracle Corporation, New England Development Center

3 Data Pump: Overview
• What is it?
• Main Features
• Architecture
• Performance
• Things to keep in mind
• Some thoughts on original exp / imp

4 Data Pump: What is it?
• Server-based facility for high-performance loading and unloading of data and metadata
• Callable: DBMS_DATAPUMP. Internally uses DBMS_METADATA
• Data is written in Direct Path stream format; metadata is written as XML
• New clients expdp and impdp: supersets of the original exp / imp
• Foundation for initial instantiation in Streams, Logical Standby, Grid, Transportable Tablespaces and Data Mining
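As a minimal sketch of the callable interface (the DMPDIR directory object and SCOTT schema are illustrative assumptions, not from the slides), a schema-mode export driven entirely from PL/SQL might look like:

    DECLARE
      h         NUMBER;
      job_state VARCHAR2(30);
    BEGIN
      -- Open a schema-mode export job
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      -- Write the dump file into an existing directory object (assumed here)
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.dmp',
                             directory => 'DMPDIR');
      -- Restrict the job to the SCOTT schema (placeholder schema)
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value => 'IN (''SCOTT'')');
      DBMS_DATAPUMP.START_JOB(h);
      -- Block until the job completes
      DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => job_state);
    END;
    /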

5 Features: Performance!!
• Automatic, two-level parallelism
– Direct Path for inter-partition parallelism
– External Tables for intra-partition parallelism
– Simple: parallel=
– Dynamic: workers can be added to and removed from a running job in Enterprise Edition
– Index builds are automatically "parallelized" up to the degree of the job
• Simultaneous data and metadata unload
• Single thread of data unload: 1.5-2X exp
• Single thread of data load: 15X-40X imp
• With index builds: 4-10X imp
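As a concrete, hypothetical invocation (credentials and file names are placeholders), a parallel full export could look like this; the %u wildcard lets each worker create its own file, so supply at least as many file specifications as the parallel degree:

    # placeholder credentials and directory object
    expdp system/manager full=y parallel=4 dumpfile=dmpdir:full%u.dmp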

6 Features: Checkpoint / Restart
• Job progress is recorded in a "Master Table"
• A job may be explicitly stopped and restarted later:
– Stop after the current item finishes, or stop immediately
• An abnormally terminated job is also restartable
• Current objects can be skipped on restart if problematic
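A sketch of a stop/restart cycle from the client side (the job name and credentials are placeholders):

    # attach to a running job and stop it, leaving it restartable
    $ expdp scott/tiger attach=SCOTT_EXPORT_JOB
    Export> STOP_JOB=IMMEDIATE
    # ... later, reattach and resume ...
    $ expdp scott/tiger attach=SCOTT_EXPORT_JOB
    Export> START_JOB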

7 Features: Network Mode
• Network import: load one database directly from another
• Network export: unload a remote database to a local dump file set
– Allows export of read-only databases
• Data Pump runs locally; the Metadata API runs remotely
• Uses DB links / listener service names, not pipes. Data is moved as 'insert into … select from … @service_name'
• The direct path engine is used on both ends
• It's easy to swamp network bandwidth: be careful!
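For illustration (assuming a database link named SRC_DB already exists; table names are placeholders), a network-mode import that pulls tables straight from the remote database with no intermediate dump file might be:

    # src_db is an assumed database link; credentials are placeholders
    impdp scott/tiger network_link=src_db tables=emp,dept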

8 Features: Fine-Grained Object Selection
• All object types are supported for both operations: export and import
• Exclude: the specified object types are excluded from the operation
• Include: only the specified object types are included, e.g., retrieve just packages, functions and procedures
• More than one of each can be specified, but use of both in the same job is prohibited by the new clients
• Both take an optional name filter for even finer granularity:
– INCLUDE PACKAGE: "LIKE 'PAYROLL%'"
– EXCLUDE TABLE: "IN ('FOO','BAR', …)"
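In the new clients these filters are spelled as INCLUDE / EXCLUDE parameters; a hedged example (on most shells the quotes need escaping, or the filters can go in a parameter file):

    # placeholder credentials, directory and file name
    expdp scott/tiger dumpfile=dmpdir:code.dmp \
        include=PACKAGE:"LIKE 'PAYROLL%'" include=FUNCTION include=PROCEDURE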

9 Features: Monitoring
• Flexible GET_STATUS call
• Per-worker status showing the current object and percent done
• Initial job space estimate and overall percent done
• Job state and description
• Work-in-progress and errors
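Running jobs can also be watched from SQL*Plus; a minimal sketch against the dictionary view that exposes job state:

    SQL> select owner_name, job_name, operation, job_mode, state
         from dba_datapump_jobs;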

10 Features: Dump File Set Management
• Directory based, e.g., DMPDIR:export01.dmp, where DMPDIR was created as:
SQL> create directory dmpdir as '/data/dumps';
• Multiple, wildcarded file specifications are supported:
dumpfile=dmp1dir:full1%u.dmp, dmp2dir:full2%u.dmp
– Files are created as needed, on a round-robin basis, from the available file specifications
• File size can be limited for manageability
• Dump file set coherency is automatically maintained
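Putting those pieces together (paths, user and the 2G size limit are illustrative):

    SQL> create directory dmpdir as '/data/dumps';
    SQL> grant read, write on directory dmpdir to scott;

    # placeholder credentials; two directory objects assumed to exist
    $ expdp scott/tiger schemas=scott filesize=2G \
          dumpfile=dmp1dir:full1%u.dmp,dmp2dir:full2%u.dmp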

11 New Clients – expdp / impdp
• Similar (but not identical) look and feel to exp / imp
• All modes supported: full, schema, table, tablespace, transportable. A superset of exp / imp
• Flashback is supported
• Query is supported by both expdp and impdp… and on a per-table basis!
• Detach from and attach to running jobs
• Multiple clients per job are allowed, but a single client can attach to only one job at a time
• If privileged, attach to and control other users' jobs
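For instance, the per-table QUERY support ties a predicate to a single table (table names and the predicate are placeholders):

    # placeholder credentials, tables and predicate
    expdp hr/hr tables=employees,departments dumpfile=dmpdir:hr.dmp \
        query=employees:"WHERE salary > 10000"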

12 New Clients – expdp / impdp
• Interactive mode is entered via Ctrl-C:
– ADD_FILE: add dump files and wildcard specifications to the job
– PARALLEL: dynamically add or remove workers
– STATUS: get detailed per-worker status and change the reporting interval
– STOP_JOB{=IMMEDIATE}: stop the job, leaving it restartable. IMMEDIATE doesn't wait for workers to finish their current work items… they'll be re-done at restart
– START_JOB: restart a previously stopped job
– KILL_JOB: stop the job and delete all its resources (master table, dump files), leaving it unrestartable
– CONTINUE: leave interactive mode, continue logging
– EXIT: exit the client, leaving the job running
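An abbreviated interactive session using the commands above (the job name and flow are illustrative):

    # placeholder credentials and files; ^C drops into interactive mode
    $ expdp scott/tiger schemas=scott dumpfile=dmpdir:scott%u.dmp
    ^C
    Export> PARALLEL=4
    Export> STOP_JOB=IMMEDIATE
    $ expdp scott/tiger attach=SYS_EXPORT_SCHEMA_01
    Export> START_JOB
    Export> CONTINUE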

13 Features: Other Cool Stuff…
• DDL transformations are easy with XML:
– REMAP_SCHEMA
– REMAP_TABLESPACE
– REMAP_DATAFILE
– Segment and storage attributes can be suppressed
• Can extract and load just data, just metadata, or both
• A SQLFILE operation generates an executable DDL script
• If a table pre-exists at load time, you can: skip it (the default), replace it, truncate then load, or append to it
• Space estimates are based on allocated blocks (default) or statistics, if available
• The Enterprise Manager interface integrates 9i and 10g
• Callable!
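Two hedged examples (schema and file names are placeholders): remapping a schema on import, and generating a DDL script without loading anything:

    # remap HR objects into HR_TEST, replacing pre-existing tables
    impdp system/manager dumpfile=dmpdir:hr.dmp remap_schema=hr:hr_test \
        table_exists_action=replace
    # write the DDL to a script instead of executing it
    impdp system/manager dumpfile=dmpdir:hr.dmp sqlfile=dmpdir:hr_ddl.sql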

14 Architecture: Block Diagram
[Diagram: the clients (expdp, impdp, Enterprise Manager, and other clients such as Data Mining) all call DBMS_DATAPUMP, the data/metadata movement engine, which in turn drives the Direct Path API, the External Table API (with its ORACLE_LOADER and ORACLE_DATAPUMP access drivers) and the Metadata API (DBMS_METADATA).]

15 Architecture: Flow Diagram
[Diagram: User A (expdp) and User B (OEM) each connect through a shadow process. Shadow processes send dynamic commands (stop, start, parallel, etc.) to the Master Control Process on a command-and-control queue; a status queue carries work-in-progress and errors back. The Master Control Process maintains the master table and log file and coordinates Worker A (metadata), Worker B (direct path) and Worker C (external table); an external-table worker can fan out to Parallel Processes 01 and 02. The workers write the dump file set: data, metadata and the master table.]

16 No Clients Required!
[Diagram: the same server-side picture with no clients attached: the Master Control Process, master table, Worker A (metadata), Worker B (direct path), Worker C (external table), Parallel Processes 01 and 02, the log file and the dump file set keep running on their own.]

17 Data Pump: Performance Tuning
• The default initialization parameters are fine!
– Make sure disk_asynch_io remains TRUE
• Spread the I/O!
• parallel= no more than 2X the number of CPUs: do not exceed disk spindle capacity
– Corollary: SPREAD THE I/O!!!
• Sufficient SGA for AQ messaging and Metadata API queries
• Sufficient rollback for long-running queries
That's it!
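To verify the one parameter called out above (standard SQL*Plus, nothing Data Pump specific):

    SQL> show parameter disk_asynch_io

The VALUE column should read TRUE.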

18 Large Internet Company
2 fact tables: 16.2M rows, 2 GB

Program                                            Elapsed
exp out of the box: direct=y                       0 hr 10 min 40 sec
exp tuned: direct=y buffer=2M recordlength=64K     0 hr 04 min 08 sec
expdp out of the box: parallel=1                   0 hr 03 min 12 sec
imp out of the box                                 2 hr 26 min 10 sec
imp tuned: buffer=2M recordlength=64K              2 hr 18 min 37 sec
impdp out of the box: parallel=1                   0 hr 03 min 05 sec

With one index per table:
imp tuned: buffer=2M recordlength=64K              2 hr 40 min 17 sec
impdp: parallel=1                                  0 hr 25 min 10 sec

19 Oracle Applications Seed Database
• Metadata intensive: 392K objects, 200 schemas, 10K tables, 2.1 GB of data total
• Original exp / imp total: 32 hr 50 min
– exp: 2 hr 13 min; imp: 30 hr 37 min
• Data Pump expdp / impdp total: 15 hr 40 min
– expdp: 1 hr 55 min; impdp: 13 hr 45 min
– parallel=2 for both expdp and impdp

20 Keep in Mind:
• Designed for *big* jobs with lots of data
– Metadata performance is about the same
– More complex infrastructure, longer startup
• XML is bigger than DDL, but much more flexible
• The data format in dump files is ~15% more compact than exp's
• Import subsetting is accomplished by pruning the Master Table

21 Original exp and imp
• Original imp will be supported forever to allow loading of V5 – 9i dump files
• Original exp will ship at least in 10g, but may not support all new functionality
• 9i exp may be used for downgrades from 10g
• Original and Data Pump dump file formats are not compatible

22 10g Beta Feedback
• British Telecom: Ian Crocker, Performance & Storage Consultant
"We have tested Oracle Data Pump, the new Oracle10g Export and Import Utilities. Data Pump Export performed twice as fast as Original Export, and Data Pump Import performed ten times faster than Original Import. The new manageability features should give us much greater flexibility in monitoring job status."
• Airbus Deutschland: Werner Kawollek, Operation Application Management
"We have tested the Oracle Data Pump Export and Import utilities and are impressed by their rich functionality. First tests have shown tremendous performance gains in comparison to the original export and import utilities."

23 Please visit our Demo!
Oracle Database 10g Data Pump: Faster and Better Export / Import
Try out Data Pump's tutorial in the Oracle By Example (OBE) area: "Unloading and Loading Database Contents"

24 Q & A: Questions & Answers

