The Export and Import tools are used to transfer data from one Oracle database to another. You use the Export tool to export data from the source database, and the Import tool to load data into the target database. When you export tables from the source database, the Export tool extracts the tables and puts them into a dump file. This dump file is transferred to the target database, where the Import tool copies the data from the dump file into the target database.
From version 10g onward, Oracle has also shipped the Data Pump Export and Import tools, which are enhanced versions of the original Export and Import tools.
Data Pump was introduced in Oracle 10g, whereas conventional exp/imp was used for logical backups in versions prior to 10g. exp/imp still works in all versions of Oracle.
Conventional exp/imp can utilize the client machine's resources for taking backups, but Data Pump works only on the server.
Data Pump operates on a group of files called a dump file set, whereas normal export operates on a single file.
Data Pump accesses files on the server (using Oracle directory objects). Traditional export can access files on both the client and the server (without using Oracle directories).
exp/imp represents database metadata information as DDL in the dump file, whereas Data Pump represents it in XML document format.
Data Pump (expdp/impdp) uses parallel execution rather than the single-stream execution of exp/imp, for improved performance.
Data Pump does not support sequential media such as tapes, but traditional export does.
Data Pump will recreate the user, whereas the old imp utility required the DBA to
create the user ID before importing.
Data Pump Import also provides several remapping parameters:

REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
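As a minimal sketch of the most common of these (all schema, tablespace, and file names here are illustrative, not from the original):

$ impdp system/manager directory=exp_dir dumpfile=scott.dmp logfile=imp_remap.log remap_schema=scott:scott_copy remap_tablespace=users:users_new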
Exports and imports can also be performed over the network using database links, without even generating a dump file, via the NETWORK_LINK parameter.
The CONTENT parameter gives you the freedom to choose what to export, with the options METADATA_ONLY, DATA_ONLY, and ALL.
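For instance, a metadata-only export might look like this (the file names are illustrative placeholders):

$ expdp scott/tiger directory=exp_dir dumpfile=scott_meta.dmp logfile=scott_meta.log content=metadata_only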
A few parameter names changed in Data Pump, and this often causes confusion with the parameters of normal exp/imp:

EXP/IMP Parameter       EXPDP/IMPDP Parameter
owner                   schemas
file                    dumpfile
log                     logfile / nologfile
fromuser/touser         remap_schema (IMPDP)
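As an illustration of the renaming, the same schema export in both tools (directory and file names are placeholders):

$ exp scott/tiger owner=scott file=scott.dmp log=scott.log
$ expdp scott/tiger schemas=scott directory=exp_dir dumpfile=scott.dmp logfile=scott.log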
Disadvantages

The export cannot be written directly to tape.
The import works only with Oracle 10g or above.
It cannot be used with Unix pipes.
Resource consumption can be controlled using the PARALLEL option. PARALLEL is the only tuning parameter that is specific to Data Pump.
For optimum performance, it is recommended that the parameter value be no more than twice the number of CPUs on the database server.
Example: expdp trans/trans SCHEMAS=trans DIRECTORY=exportdir DUMPFILE=exptrans%U.dmp PARALLEL=3
In transportable tablespace export, the degree of parallelism cannot be greater than 1.
If a substitution variable (%U) is specified along with the PARALLEL parameter, then one file for each template is created initially. More files are created from the templates as they are needed, based on how much data is being exported and how many parallel processes are given work to perform during the job.
During import (impdp), the PARALLEL parameter value should not be larger than the number of files in the dump file set.
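For example, importing the dump file set produced by the expdp example above with a matching degree of parallelism (a sketch; the three exptrans files are assumed to exist in exportdir):

$ impdp trans/trans DIRECTORY=exportdir DUMPFILE=exptrans%U.dmp PARALLEL=3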
Data Pump provides fine-grained filtering of objects during export or import through the EXCLUDE and INCLUDE features. We can use these options with both the EXPDP and IMPDP utilities; it is a kind of object-exception marking during the expdp or impdp run.
If you use the EXCLUDE parameter with Data Pump, all objects except those mentioned in the EXCLUDE clause are considered for the operation. I feel this is a very good Data Pump feature. EXCLUDE and INCLUDE are applicable to database objects such as tables, indexes, triggers, and procedures. The traditional exp/imp utility has different options for different objects, and those are limited to certain objects (such as tables=<list of tables> and indexes=N). Data Pump is more flexible, as you can include multiple objects with multiple clauses. See the examples below.
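For example (illustrative sketches; the directory and dump file names are placeholders, and on the command line the quotes in the name clause typically need shell escaping, or can be placed in a parameter file instead):

$ expdp scott/tiger directory=exp_dir dumpfile=scott_noind.dmp logfile=scott_noind.log schemas=scott exclude=INDEX,STATISTICS

$ expdp scott/tiger directory=exp_dir dumpfile=scott_tabs.dmp logfile=scott_tabs.log schemas=scott include=TABLE:"IN ('EMP1','EMP2')"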
Table partitions are the exception for the EXCLUDE option in Data Pump; see the partition examples later in this document.
Using the NETWORK_LINK option you can import a schema from a source database into a target database. One advantage of this option is that you don't need a separate export and import: it does the export and import in a single shot from the source to the destination. Also, no file system space is needed to accommodate huge dump files, as you can import directly into the target using NETWORK_LINK.
It is a very handy Data Pump option: you can also take a backup of a source database schema from another database and store the dump files at the target location.
See the examples below. Here we have two databases, prod8 (source) and prod9 (target).
SQL> select name from v$database;
NAME
---------
PROD8
SQL> show user
USER is "SCOTT"
SQL> select * from tab;
no rows selected
SQL> create table example_tab1 as select * from all_objects;
Table created.
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EXAMPLE_TAB1                   TABLE
prod8 =
(description =
(address =
(protocol = tcp)
(host = devdata.abc.diamond.net)
(port = 1522)
)
(connect_data =
(server = dedicated)
(sid = prod8)
)
)
Test the connectivity using the tnsping utility
$ tnsping prod8
TNS Ping Utility for Solaris: Version 11.1.0.7.0 - Production on 05-JUL-2011 22:26:12
Copyright (c) 1997, 2008, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (description = (address = (protocol = tcp) (host =
devdata.abc.diamond.net) (port = 1522)) (connect_data = (server = dedicated) (sid =
prod8)))
OK (20 msec)
Connect to prod9 using sqlplus and create a database link to prod8 with the scott user (the link uses the tnsnames alias defined above):

$ sqlplus scott/tiger

SQL> create database link prod8 connect to scott identified by tiger using 'prod8';

Database link created.

SQL> select * from tab@prod8;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
EXAMPLE_TAB1                   TABLE
The database link is working and ready from database prod9 to prod8. Now I am going to import the scott schema of the prod8 database into the prod9 database without a dump file. See below:
$ impdp scott/tiger directory=exp_dir logfile=impnetworkscott.log
network_link=prod8
. . imported "SCOTT"."EXAMPLE_TAB1"                          95307 rows
Verify on the target database:

SQL> select name from v$database;

NAME
---------
PROD9
SQL> show user
USER is "SCOTT"
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
DEPT                           TABLE
EMP1                           TABLE
EMP2                           TABLE
EXAMPLE                        TABLE
EXAMPLE_PARTITION              TABLE
EXAMPLE_TAB1                   TABLE
GT_EMP                         TABLE
TEST                           TABLE

8 rows selected.
Yes: table EXAMPLE_TAB1 has been imported into the prod9 database without a dump file!
The next example takes a schema export of the source database from the target machine. You can store the dump in files on the target.
$ expdp scott/tiger directory=exp_dir dumpfile=networkscott.dmp
logfile=networkscott.log network_link=prod8
Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 05 July, 2011 23:29:50
**********************************************************************************
Using the Data Pump impdp utility we can generate the SQL (DDL/DML) from a dump file using the SQLFILE option. When you execute impdp with the SQLFILE option, it won't import the data into the actual tables or schema. If you want to generate particular DDL from the database, you can use this option. Please find the example below with the full syntax.

First take a schema export to create the dump file (the command is reconstructed from the Starting line of the log below):

$ expdp scott/tiger directory=exp_dir dumpfile=scott.dmp logfile=scott.log
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********
directory=exp_dir dumpfile=scott.dmp logfile=scott.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 307.5 MB
. . exported "SCOTT"."EXAMPLE_PARTITION"
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P1"
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"
. . exported "SCOTT"."DEPT"                                  5.945 KB       4 rows
. . exported "SCOTT"."EMP1"                                  5.875 KB       2 rows
. . exported "SCOTT"."EMP2"                                  5.890 KB       3 rows
Do the import with the impdp utility using the SQLFILE option. The argument value must be a file name, into which the SQL will be spooled from the dump file. It is better to use a .sql extension.
$ impdp scott/tiger directory=exp_dir dumpfile=scott.dmp sqlfile=script.sql
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_SQL_FILE_FULL_01": scott/******** directory=exp_dir
dumpfile=scott.dmp sqlfile=script.sql
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
The generated script.sql contains statements such as:

BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'),
export_db_name=>'PROD9.DIAMOND.COM', inst_scn=>'11626845804212');
COMMIT;
END;
/
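To generate only particular DDL from the dump, the SQLFILE option can be combined with INCLUDE (a sketch; the output file name is illustrative):

$ impdp scott/tiger directory=exp_dir dumpfile=scott.dmp sqlfile=table_ddl.sql include=TABLE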
The EXAMPLE table in the scott schema has two partitions:

PARTITION_NAME                 TABLE_NAME
------------------------------ ------------------------------
EXAMPLE_P2                     EXAMPLE
EXAMPLE_P1                     EXAMPLE
I want to export only the example_p2 partition; that is, in this example I am going to exclude the example_p1 partition. Please find the DBMS_DATAPUMP API code for this purpose below; the dump and log file names match the EXP_PART_EXCLUDE dump and EXPDAT.LOG files checked afterwards, and the directory object EXP_DIR is assumed. Connect to sqlplus and execute the following PL/SQL block:
declare
  rvalue number;
begin
  rvalue := dbms_datapump.open (operation => 'EXPORT',
                                job_mode  => 'TABLE');
  -- dump file and log file, written to the EXP_DIR directory object
  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'EXP_PART_EXCLUDE.DMP',
                          directory => 'EXP_DIR',
                          filetype  => dbms_datapump.ku$_file_type_dump_file);
  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'EXPDAT.LOG',
                          directory => 'EXP_DIR',
                          filetype  => dbms_datapump.ku$_file_type_log_file);
  -- limit the table-mode job to the EXAMPLE table
  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'NAME_EXPR',
                                 value  => 'IN (''EXAMPLE'')');
  -- export only the EXAMPLE_P2 partition
  dbms_datapump.data_filter (handle => rvalue,
                             name   => 'PARTITION_LIST',
                             value  => '''EXAMPLE_P2''');
  dbms_datapump.start_job (handle => rvalue);
  dbms_datapump.detach (handle => rvalue);
end;
/
$ ls -ltr EXP_PART_EXCLUDE*
-rw-r----- 1 oracle dba
Verify the expdp log file. In the log you can see that only the EXAMPLE_P2 partition was exported.
$ tail -f EXPDAT.LOG
Starting "SCOTT"."SYS_EXPORT_TABLE_04":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type
TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"
Note that if you use the EXCLUDE parameter in expdp, it will not consider the partitions. See the example below.
expdp exclude=TABLES:"IN ('EXAMPLE:EXAMPLE_P1')" directory=exp_dir
dumpfile=scott_PART.dmp logfile=exp_scott_PART.log
$ tail -f exp_scott_PART.LOG
Starting "SCOTT"."SYS_EXPORT_TABLE_04":
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"
REUSE_DUMPFILES: set the parameter value to Y to overwrite an existing dump file. By default the option is treated as N. See the examples below.

Normal scenario, with the file already present in the export directory:
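Left at the default REUSE_DUMPFILES=N, the export fails when the dump file already exists; the error stack typically looks along these lines (an illustrative transcript, not taken from the original):

ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/home/oracle/scott/tde.dmp"
ORA-27038: created file already exists

With REUSE_DUMPFILES=Y the same export succeeds, as shown below: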
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** directory=exp_dir
dumpfile=tde.dmp tables=example reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.312 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P1"
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"
Once you have started an export job, you can suspend it and resume it later, when the server load has come down. Another feature is that you can suspend the job from one client machine and resume it from a different client.
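A minimal sketch of suspending the job from the client (the command matches the CLIENT_COMMAND shown later in this example; pressing Ctrl-C at the client drops you to the interactive prompt):

$ expdp scott/tiger schemas=scott directory=exp_dir dumpfile=exp_schema.dmp logfile=exp_schema.log job_name=expschema
...
^C
Export> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes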
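The suspended job is visible in the dba_datapump_jobs view; the output below presumably came from a query along these lines:

SQL> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;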
OWNER_NAME                     JOB_NAME
------------------------------ ------------------------------
OPERATION                      JOB_MODE
------------------------------ ------------------------------
STATE
------------------------------
SCOTT                          EXPSCHEMA
EXPORT                         SCHEMA
NOT RUNNING
Using the ATTACH parameter you can resume the job. Once you attach to the job from the prompt, expdp loads the job details and presents the export> prompt; you then give the CONTINUE_CLIENT command to resume the job.
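A sketch of attaching to the suspended job (ATTACH is the standard Data Pump parameter; the job name comes from this example). The job details that expdp prints on attach follow:

$ expdp scott/tiger attach=expschema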
Job: EXPSCHEMA
Owner: SCOTT
Operation: EXPORT
Creator Privs: TRUE
GUID: A993FD0301520998E04400144F9F5BAA
Start Time: Tuesday, 02 August, 2011 22:25:12
Mode: SCHEMA
Instance: prod9
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name        Parameter Value:
CLIENT_COMMAND        scott/******** schemas=scott directory=exp_dir dumpfile=exp_schema.dmp logfile=exp_schema.log job_name=expschema
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/scott/exp_schema.dmp
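Then resume from the interactive prompt (CONTINUE_CLIENT is the standard Data Pump interactive command):

Export> continue_client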
The Data Pump QUERY option is used to export a subset of table data according to a WHERE filter clause. Please find the examples below.

EXPDP with a parameter file (parfile)

Suppose you want to export two tables using a WHERE clause. You can specify the WHERE clause for each table.
SQL> select count(*) from object_list where object_name like 'EIM%';
  COUNT(*)
----------
   2388224
Parfile Content:
userid="/ as sysdba"
job_name=query_export
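The parfile continues with the table list and the per-table WHERE clauses; the remaining lines would look something like this (the dump/log file names and the CANDIDATE predicate are illustrative assumptions, inferred from the log below):

directory=exp_dir
dumpfile=query_export.dmp
logfile=query_export.log
tables=test.object_list, test.candidate
query=test.object_list:"WHERE object_name LIKE 'EIM%'"
query=test.candidate:"WHERE rownum = 1"

One advantage of the parfile approach is that the quotes in the QUERY clause need no shell escaping.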
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit
Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options
Starting "SYS"."QUERY_EXPORT": /******** AS SYSDBA parfile=exp.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.678 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TEST"."OBJECT_LIST"
. . exported "TEST"."CANDIDATE"                              5.453 KB       1 rows