
Oracle Export and Import / Data Pump

These tools are used to transfer data from one Oracle database to another. You use the
Export tool to export data from the source database, and the Import tool to load data into the target
database. When you export tables from the source database, the Export tool extracts the tables and
writes them into a dump file. This dump file is transferred to the target database, where
the Import tool copies the data from the dump file into the target database.
From version 10g onwards Oracle has also shipped the Data Pump Export and Import tools, which are
enhanced versions of the original Export and Import tools.

Data Pump was introduced in Oracle 10g, whereas conventional exp/imp was used for
logical backups in versions prior to 10g. Exp/imp still works in all versions of
Oracle.

Conventional exp/imp can utilize the client machine's resources for taking backups,
but Data Pump works only on the server.
Data Pump operates on a group of files called a dump file set, whereas normal export
operates on a single file.
Data Pump accesses files on the server (using Oracle directory objects); traditional export
can access files on both client and server (without Oracle directories).
exp/imp represents database metadata as DDL in the dump file,
whereas Data Pump represents it in XML document format.
Data Pump uses parallel execution rather than exp/imp's single stream of execution, for
improved performance.
Data Pump does not support sequential media like tapes, but traditional export does.


Data Pump will recreate the user, whereas the old imp utility required the DBA to
create the user ID before importing.

In Data Pump, we can stop and restart the jobs.


Why is expdp faster than exp (or why is Data Pump faster than conventional
export/import)?
Data Pump works in block mode; exp works in byte mode.
Data Pump can do parallel execution.
Data Pump uses the direct path API.
Where is the job information stored in Data Pump (or, if you restart a Data Pump job,
how does it know where to resume)? Whenever a Data Pump export or import is
running, Oracle creates a master table named after the JOB_NAME, and it is dropped once the
job is done. From this table Oracle finds out how much of the job has completed and
where to continue from (see the query sketch below).
The default export job name is SYS_EXPORT_XXXX_01, where XXXX can be
FULL, SCHEMA, or TABLE.
The default import job name is SYS_IMPORT_XXXX_01, where XXXX can be
FULL, SCHEMA, or TABLE.
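
While an export or import is running, the master table and job state are visible in the dictionary. A quick check might look like this (the view names are standard; the LIKE pattern simply matches the default job names mentioned above):

SQL> select owner_name, job_name, state from dba_datapump_jobs;
SQL> select table_name from dba_tables where table_name like 'SYS_EXPORT%';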

How to import only metadata? - CONTENT=METADATA_ONLY
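
For example, a metadata-only import could be run like this (the directory and dump file names are illustrative):

$ impdp scott/tiger directory=exp_dir dumpfile=scott.dmp content=metadata_only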

How to import into a different user/tablespace/datafile/table? Use the REMAP_* parameters (example after the list):

REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
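
For example, to land the SCOTT schema in a different user and tablespace (the target names hr and example are illustrative):

$ impdp system/manager directory=exp_dir dumpfile=scott.dmp remap_schema=scott:hr remap_tablespace=users:example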

Data Pump gives a 15-50% performance improvement over exp/imp.


Export and import can be done over the network using database links, even without
generating a dump file, using the NETWORK_LINK parameter.
The CONTENT parameter gives the freedom to choose what to export, with the options
METADATA_ONLY, DATA_ONLY, and ALL.
A few parameter names changed in Data Pump, which often causes confusion with the
parameters of normal exp/imp:

EXP/IMP Parameter        EXPDP/IMPDP Parameter
-----------------        ---------------------
owner                    schemas
file                     dumpfile
log                      logfile/nologfile
fromuser, touser (IMP)   remap_schema (IMPDP)
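
To make the mapping concrete, here is the same schema export written both ways (user, file, and directory names are illustrative):

$ exp scott/tiger owner=scott file=scott.dmp log=scott.log
$ expdp scott/tiger schemas=scott directory=exp_dir dumpfile=scott.dmp logfile=scott.log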

Oracle Data Pump (expdp/impdp) Benefits / Advantages & Disadvantages

Data Pump Overview


Data Pump was introduced in Oracle 10g and is entirely different from normal export/import.
Similar to export/import, using Data Pump we can migrate data from one database to
another database, even when they run on different operating systems. The DBMS_DATAPUMP package
implements the Data Pump API, so you can also drive the utility programmatically.
The expdp user process initializes a server-side process/job, and that server process writes the data to disk
on the server, running independently of the user process. A database directory object
with read and write privileges is needed for any Data Pump operation.
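
For instance, the directory object used in the examples below could be created and granted like this (the path is illustrative):

SQL> create or replace directory exp_dir as '/home/oracle/scott';
SQL> grant read, write on directory exp_dir to scott;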

Data Pump Advantages


Better control over running jobs: it provides features like start, stop, and restart
Improved performance, because it is a server-side technology with a parallel-streams
option
Using the parallel-streams option, Data Pump can back up a large volume of data
quickly
Data Pump is 15-50% faster than conventional export/import.
It has the ability to estimate job times
Failed jobs can be restarted
Using the EXCLUDE/INCLUDE options we can perform fine-grained object selection
Backup jobs can be monitored
It has remapping capabilities
It supports export/import operations over the network; the
NETWORK_LINK parameter initiates the export using a database link
Using the QUERY parameter, the DBA can extract data from tables as with SELECT
The CONTENT parameter gives the flexibility to choose what to import/export, for
example metadata only, data only, or both
It supports the full range of data types
It supports cross-platform compatibility
There is no need to specify a buffer size as in normal exp/imp
It has its own performance-tuning features
The V$SESSION_LONGOPS view can be used for time estimation of Data Pump
jobs (see the sketch after this list)
It supports an interactive mode that allows the DBA to monitor or interact with
ongoing jobs
Dumps can be compressed
Data can be encrypted
XML schemas and XMLType are supported by Data Pump
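
A sketch of the V$SESSION_LONGOPS estimation mentioned above (the OPNAME filter is illustrative; adjust it to your job):

SQL> select sid, opname, sofar, totalwork, round(sofar/totalwork*100,2) pct_done
     from v$session_longops
     where opname like '%EXPORT%' and totalwork > 0 and sofar <> totalwork;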


Disadvantages
Export cannot be taken to tape
Import works only with Oracle 10g or above
Cannot be used with UNIX pipes

Comparison between EXP and EXPDP


Related Views
DBA_DATAPUMP_JOBS
USER_DATAPUMP_JOBS
DBA_DIRECTORIES
DATABASE_EXPORT_OBJECTS
SCHEMA_EXPORT_OBJECTS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS
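
For example, DBA_DATAPUMP_SESSIONS can be joined to V$SESSION on the session address to map jobs to their database sessions:

SQL> select s.sid, s.serial#, d.owner_name, d.job_name
     from v$session s, dba_datapump_sessions d
     where s.saddr = d.saddr;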

Some Examples for Real-Time Use and Interviews


1. Data Pump (expdp/impdp) Tuning Features: the PARALLEL
option for faster performance
Some facts about the Data Pump PARALLEL=n option
By default the value of the PARALLEL option is 1.
The export job creates as many worker processes as specified in the PARALLEL option.
The value of the PARALLEL option can be modified in interactive mode.
This option is used with the %U clause in the DUMPFILE parameter of expdp/impdp.


Resource consumption can be controlled using the PARALLEL option. PARALLEL is the
only tuning parameter that is specific to Data Pump.
For optimum performance, it is recommended that the value of the parameter be no more than
twice the number of CPUs in the database server.
Example: expdp trans/trans SCHEMAS=trans DIRECTORY=exportdir
DUMPFILE=exptrans%U.dmp PARALLEL=3
In a transportable tablespace export, the degree of parallelism cannot be greater than 1.
If a substitution variable (%U) is specified along with the PARALLEL parameter,
then one file for each template is initially created. More files are created from the
templates as needed, based on how much data is being exported and how many
parallel processes are given work to perform during the job.
During import (impdp), the PARALLEL parameter value should not be larger than the
number of files in the dump set.
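
A matching parallel import of that dump file set might look like this (same illustrative user and directory as the export example above):

$ impdp trans/trans SCHEMAS=trans DIRECTORY=exportdir DUMPFILE=exptrans%U.dmp PARALLEL=3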

2. Data Pump EXPDP IMPDP EXCLUDE and INCLUDE Options: Features and Examples

Data Pump provides fine filtering of objects during export or import through
the EXCLUDE and INCLUDE features. We can use these options
with both the EXPDP and IMPDP utilities; it is a kind of object exception marking
during expdp or impdp.
If you use the EXCLUDE parameter with Data Pump, all objects except those
mentioned in the EXCLUDE clause will be considered for the operation, which is a
very good feature. EXCLUDE and INCLUDE are applicable to
database objects like tables, indexes, triggers, procedures, etc. In the
traditional exp/imp utility we had different options for different objects, and those were
limited to certain objects, like tables=<list of tables>,
indexes=N, etc. In Data Pump it is more flexible, as you can include multiple
objects with multiple clauses. See the examples below.
Table partitions are an exception for the EXCLUDE option in Data Pump; see
the link mentioned below.

Datapump Exclude Table


Syntax
INCLUDE=object_type[:name_clause] [,object_type[:name_clause]]
EXCLUDE=object_type[:name_clause] [,object_type[:name_clause]]
In the name_clause you can use expressions with operators like IN, NOT IN,
LIKE, =, and so on to filter the objects according to your requirement.
The only limitation or disadvantage I could see is that it cannot be used with the SCHEMAS
option. See the error message if you try the EXCLUDE option for
schema operations.
$ expdp exclude=TABLES:"LIKE 'SCOTT.EXAM%'" directory=exp_dir
dumpfile=scott.dmp logfile=exp_scott.log
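
Conversely, an INCLUDE filter that exports only a couple of named tables could look like this (the table names are illustrative):

$ expdp scott/tiger include=TABLE:"IN ('EMP','DEPT')" directory=exp_dir dumpfile=scott_tabs.dmp logfile=exp_scott_tabs.log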

3. Data Pump impdp expdp NETWORK_LINK option: Transfer a schema across databases using DB links, without a dump file!

Using the NETWORK_LINK option you can import a schema from a source
database to a target database. One advantage of this option is that you don't need separate export
and import steps, as it does the export and import in a single shot from the source to the
destination. Also, no file system space is needed to accommodate huge
dump files, as we can import directly into the target using NETWORK_LINK.
It is a very useful option with Data Pump. You can also take a backup of a source
database schema from another database and store the dump files at the target
location.
See the examples below. Here we have two databases, prod8 (source) and
prod9 (target).
SQL> select name from v$database;

NAME
---------
PROD8

SQL> show user
USER is "SCOTT"
SQL> select * from tab;

no rows selected

SQL> create table example_tab1 as select * from all_objects;

Table created.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ -------  ----------
EXAMPLE_TAB1                   TABLE

I have added a TNS entry (file location:
$ORACLE_HOME/network/admin/tnsnames.ora) for prod8 on my prod9 database box.
The entry is as below:


prod8 =
  (description =
    (address =
      (protocol = tcp)
      (host = devdata.abc.diamond.net)
      (port = 1522)
    )
    (connect_data =
      (server = dedicated)
      (sid = prod8)
    )
  )
Test the connectivity using the tnsping utility

$ tnsping prod8
TNS Ping Utility for Solaris: Version 11.1.0.7.0 - Production on 05-JUL-2011 22:26:12
Copyright (c) 1997, 2008, Oracle. All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (description = (address = (protocol = tcp) (host =
devdata.abc.diamond.net) (port = 1522)) (connect_data = (server = dedicated) (sid =
prod8)))


OK (20 msec)
Connect to prod9 using sqlplus and create a database link to prod8 as the scott user
$ sqlplus

SQL*Plus: Release 11.1.0.7.0 - Production on Tue Jul 5 22:26:20 2011

Copyright (c) 1982, 2008, Oracle. All rights reserved.


Enter user-name: scott/tiger
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> create database link prod8 connect to scott identified by scott using 'prod8';
Database link created.
SQL> select * from tab@prod8;

TNAME                          TABTYPE  CLUSTERID
------------------------------ -------  ----------
EXAMPLE_TAB1                   TABLE

The database link is working and ready from database prod9 to prod8.
Now I am going to import the scott schema of the prod8 database into the prod9 database
without a dump file. See below:
$ impdp scott/tiger directory=exp_dir logfile=impnetworkscott.log
network_link=prod8


Import: Release 11.1.0.7.0 - 64bit Production on Tuesday, 05 July, 2011 23:55:37


Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_IMPORT_SCHEMA_01": scott/********
directory=exp_dir logfile=impnetworkscott.log network_link=prod8
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SCOTT" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SCOTT"."EXAMPLE_TAB1"                              95307 rows

Job "SCOTT"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 23:58:04


Verify whether it was imported or not. Please ignore the error, because the schema already
exists in the target.
SQL> select name from v$database;

NAME
---------
PROD9
SQL> show user
USER is "SCOTT"
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ -------  ----------
DEPT                           TABLE
EMP1                           TABLE
EMP2                           TABLE
EXAMPLE                        TABLE
EXAMPLE_PARTITION              TABLE
EXAMPLE_TAB1                   TABLE
GT_EMP                         TABLE
TEST                           TABLE

8 rows selected.
Yes, table EXAMPLE_TAB1 has been imported into the prod9 database without a dump file!
The next example takes a schema export from the source database, run from the target machine.
You can still store the dump in files:
$ expdp scott/tiger directory=exp_dir dumpfile=networkscott.dmp
logfile=networkscott.log network_link=prod8
Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 05 July, 2011 23:29:50


Copyright (c) 2003, 2007, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********
directory=exp_dir dumpfile=networkscott.dmp logfile=networkscott.log
network_link=prod8
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."EXAMPLE_TAB1"                    9.496 MB   95307 rows

Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded


******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/scott/networkscott.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 23:34:26


4. Data Pump impdp expdp: SQLFILE option to extract DDL and DML from a dump file

Using the Data Pump impdp utility we can generate the SQL (DDL/DML) from a dump file
using the SQLFILE option. When you execute impdp with the SQLFILE option, it won't import
the data into the actual tables or schema. If you want to extract
particular DDL from a database, you can use this option. Please find the
example below with the full syntax.

$ expdp scott/tiger directory=exp_dir dumpfile=scott.dmp logfile=scott.log


Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 05 July, 2011 21:16:13

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/********
directory=exp_dir dumpfile=scott.dmp logfile=scott.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 307.5 MB


Processing object type SCHEMA_EXPORT/USER


Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type
SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type
SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."TEST"                            262.2 MB  2251900 rows
. . exported "SCOTT"."EXAMPLE_PARTITION"               837.4 KB    95120 rows
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P1"            441.3 KB    49999 rows
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"            408.3 KB    45121 rows
. . exported "SCOTT"."DEPT"                            5.945 KB        4 rows
. . exported "SCOTT"."EMP1"                            5.875 KB        2 rows
. . exported "SCOTT"."EMP2"                            5.890 KB        3 rows


Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded


**********************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/scott/scott.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 21:21:25

Do the import with the impdp utility using the SQLFILE option. The argument value must be a
file name into which the SQL will be spooled from the dump file; it is better to use a .sql
extension.
$ impdp scott/tiger directory=exp_dir dumpfile=scott.dmp sqlfile=script.sql

Import: Release 11.1.0.7.0 - 64bit Production on Tuesday, 05 July, 2011 21:34:36

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_SQL_FILE_FULL_01": scott/******** directory=exp_dir
dumpfile=scott.dmp sqlfile=script.sql
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE


Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA


Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type
SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type
SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SCOTT"."SYS_SQL_FILE_FULL_01" successfully completed at 21:34:53

The content of the script.sql file would look like this:


-- CONNECT SCOTT
ALTER SESSION SET EDITION = "ORA$BASE";
-- new object type path: SCHEMA_EXPORT/USER
-- CONNECT SYSTEM
ALTER SESSION SET EDITION = "ORA$BASE";
CREATE USER "SCOTT" IDENTIFIED BY VALUES
'S:D846EA3EB87287A3AED08AF38EB0B4F640F49A9A4A972108BF3917B769;DB1B37F84BDF15E6'
DEFAULT TABLESPACE "USERS"
TEMPORARY TABLESPACE "TEMP";


-- new object type path: SCHEMA_EXPORT/SYSTEM_GRANT


GRANT UNLIMITED TABLESPACE TO "SCOTT";
-- new object type path: SCHEMA_EXPORT/ROLE_GRANT
GRANT "DBA" TO "SCOTT";
-- new object type path: SCHEMA_EXPORT/DEFAULT_ROLE
ALTER USER "SCOTT" DEFAULT ROLE ALL;

-- new object type path: SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA


-- CONNECT SCOTT
ALTER SESSION SET EDITION = "ORA$BASE";

BEGIN
sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','CURRENT_SCHEMA'),
export_db_name=>'PROD9.DIAMOND.COM', inst_scn=>'11626845804212');
COMMIT;
END;
/

-- new object type path: SCHEMA_EXPORT/SEQUENCE/SEQUENCE


CREATE SEQUENCE "SCOTT"."SEQSCOTT" MINVALUE 1 MAXVALUE
999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20
NOORDER NOCYCLE


-- new object type path: SCHEMA_EXPORT/TABLE/TABLE


CREATE TABLE "SCOTT"."EXAMPLE"
   (    "ID" NUMBER(10,0) NOT NULL ENABLE,
        "UID" VARCHAR2(40 BYTE),
        "PIX" VARCHAR2(40 BYTE),
        "FNAME" VARCHAR2(100 BYTE),
        "MNAME" VARCHAR2(100 BYTE),
        "LNAME" VARCHAR2(100 BYTE),
        "SFIX" VARCHAR2(40 BYTE),
...

5. Oracle 11g Data Pump expdp COMPRESSION option to reduce the export dump file

Oracle 11g provides different types of data compression techniques. COMPRESSION is
the option to achieve data compression in Data Pump. There are four values available
for the COMPRESSION parameter:
ALL: Both metadata and data are compressed.
DATA_ONLY: Only data is compressed.
METADATA_ONLY: Only metadata is compressed. This is the default setting.
NONE: Nothing is compressed.
In Oracle 10g Data Pump there is no data compression option; only metadata
compression is available in 10g. In Oracle 11g we don't need to use UNIX
compression utilities, as compression is available within Data Pump itself.
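
For example, a fully compressed schema export might look like this (user, directory, and file names are illustrative; note that COMPRESSION=ALL and DATA_ONLY require the Advanced Compression option):

$ expdp scott/tiger schemas=scott directory=exp_dir dumpfile=scott_comp.dmp logfile=scott_comp.log compression=all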
------------------------------------------------------------------------------------------------------------------------


6. Data Pump EXPDP: How to EXCLUDE a table partition, explained with an example

Data Pump will not exclude individual table partitions. If you use
exclude=table:"IN ('EXAMPLE:EXAMPLE_P2')" in expdp, it will just ignore the exclude and
perform a full table export with all the partitions of the table. To achieve this goal
you have to use the Data Pump API package, using
DBMS_DATAPUMP.DATA_FILTER.
Also refer to the TABLE_DATA option.
See the example below.
In this example I have a table EXAMPLE with two partitions.
In this example I have a table example with 2 partitions.
SQL> select partition_name, table_name from user_tab_partitions;

PARTITION_NAME                 TABLE_NAME
------------------------------ ------------------------------
EXAMPLE_P2                     EXAMPLE
EXAMPLE_P1                     EXAMPLE

I want to export only the example_p2 partition; that means in this example I am going
to exclude the example_p1 partition. Please find the DBMS_DATAPUMP API code for
this purpose below.
Connect to SQL*Plus and execute the PL/SQL code below:
declare
  rvalue number;
begin
  rvalue := dbms_datapump.open (operation => 'EXPORT',
                                job_mode  => 'TABLE');

  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'EXP_PART_EXCLUDE.DMP',
                          directory => 'EXP_DIR',
                          filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);

  dbms_datapump.add_file (handle    => rvalue,
                          filename  => 'EXP_PART_EXCLUDE.LOG',
                          directory => 'EXP_DIR',
                          filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'SCHEMA_EXPR',
                                 value  => 'IN (''SCOTT'')');

  dbms_datapump.metadata_filter (handle => rvalue,
                                 name   => 'NAME_EXPR',
                                 value  => 'IN (''EXAMPLE'')');

  dbms_datapump.data_filter (handle      => rvalue,
                             name        => 'PARTITION_LIST',
                             value       => '''EXAMPLE_P2''',
                             table_name  => 'EXAMPLE',
                             schema_name => 'SCOTT');

  dbms_datapump.start_job (handle => rvalue);

  dbms_datapump.detach (handle => rvalue);
end;
/

PL/SQL procedure successfully completed.

Check whether the export dump was created in the exp_dir directory.

$ ls -ltr EXP_PART_EXCLUDE*
-rw-r----- 1 oracle dba 4096 Jun 28 02:04 EXP_PART_EXCLUDE.DMP
-rw-r--r-- 1 oracle dba  137 Jun 28 02:05 EXP_PART_EXCLUDE.LOG

Verify the expdp log file. In the log file you can see that it exported only the EXAMPLE_P2
partition.

$ tail -f EXPDAT.LOG


Starting "SCOTT"."SYS_EXPORT_TABLE_04":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type
TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"            408.3 KB    45121 rows

Master table "SCOTT"."SYS_EXPORT_TABLE_04" successfully loaded/unloaded


**********************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_04 is:
/home/oracle/SCOTT/EXPDAT.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_04" successfully completed at 01:59:32

If you instead use the EXCLUDE parameter in expdp, it will not consider the partitions. See
the example below.
$ expdp exclude=TABLES:"IN ('EXAMPLE:EXAMPLE_P1')" directory=exp_dir
dumpfile=scott_PART.dmp logfile=exp_scott_PART.log
$ tail -f exp_scott_PART.LOG
Starting "SCOTT"."SYS_EXPORT_TABLE_04":


Estimate in progress using BLOCKS method...


Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type
TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"            208.8 KB    25027 rows
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"            408.3 KB    45121 rows

Master table "SCOTT"."SYS_EXPORT_TABLE_04" successfully loaded/unloaded


**********************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_04 is:
/home/oracle/SCOTT/EXPDAT.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_04" successfully completed at 03:23:32

7. Data Pump expdp REUSE_DUMPFILES option: Overwrite an existing dump file

This is an expdp option. Normally, when you perform an export
using the expdp utility and the dump file is already present in the export directory, it throws
the error ORA-27038: created file already exists. This happens when you
want to perform repeated exports using the same dump file name.
Oracle provides the option reuse_dumpfiles=[Y/N] to avoid this error. You should
set the parameter value to Y to overwrite the existing dump file. By default the
option is N. See the examples below.
A normal scenario with the file already present in the export directory:

$ expdp scott/tiger directory=exp_dir dumpfile=tde.dmp tables=example


Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 19 July, 2011 1:36:50
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "/home/oracle/scott/tde.dmp"
ORA-27038: created file already exists
Additional information: 1

Execute the expdp using REUSE_DUMPFILES


$ expdp scott/tiger directory=exp_dir dumpfile=tde.dmp tables=example
reuse_dumpfiles=y
Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 19 July, 2011 1:46:05
Copyright (c) 2003, 2007, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** directory=exp_dir
dumpfile=tde.dmp tables=example reuse_dumpfiles=y
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.312 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P1"            441.3 KB    49999 rows
. . exported "SCOTT"."EXAMPLE":"EXAMPLE_P2"            408.3 KB    45121 rows

Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded


**********************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
/home/oracle/scott/tde.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 01:49:16

8. Oracle Data Pump expdp impdp: Use the JOB_NAME option with STOP_JOB, ATTACH, KILL_JOB, and CONTINUE_CLIENT interactively

One of the main advantages of Data Pump is that you can suspend a running export or
import job and resume it later if needed. For example, if your server load is high when
you started the export job, you can suspend the job and resume it
once the server load comes down. Another feature is that you can suspend the job from
one client machine and resume it from a different client.

See an example below.

Once you press Ctrl+C in the expdp window, it switches to interactive mode with an
Export> prompt. At that prompt you can give commands such as STOP_JOB or KILL_JOB.
$ expdp scott/tiger schemas=scott directory=exp_dir dumpfile=exp_schema.dmp
logfile=exp_schema.log job_name=expschema

Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 02 August, 2011 22:17:58


Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."EXPSCHEMA": scott/******** schemas=scott
directory=exp_dir dumpfile=exp_schema.dmp logfile=exp_schema.log
job_name=expschema
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
^C
Export> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes
oracle@prod(4113) prod9 /home/oracle/scott
You can use the dba_datapump_jobs view to get the details of Data Pump jobs.


$ sqlplus "/ as sysdba"


SQL*Plus: Release 11.1.0.7.0 - Production on Tue Aug 2 22:19:43 2011

Copyright (c) 1982, 2008, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select OWNER_NAME,JOB_NAME,OPERATION,JOB_MODE,STATE from dba_datapump_jobs;

OWNER_NAME                     JOB_NAME
------------------------------ ------------------------------
OPERATION                      JOB_MODE
------------------------------ ------------------------------
STATE
------------------------------
SCOTT                          EXPSCHEMA
EXPORT                         SCHEMA
NOT RUNNING
Using the command below you can resume the job. Once you run it,
expdp loads the job details and returns to the Export> prompt, where you
give the CONTINUE_CLIENT command to resume the job.

$ expdp scott/tiger attach=EXPSCHEMA


Export: Release 11.1.0.7.0 - 64bit Production on Tuesday, 02 August, 2011 22:25:09


Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Job: EXPSCHEMA
Owner: SCOTT
Operation: EXPORT
Creator Privs: TRUE
GUID: A993FD0301520998E04400144F9F5BAA
Start Time: Tuesday, 02 August, 2011 22:25:12
Mode: SCHEMA
Instance: prod9
Max Parallelism: 1
EXPORT Job Parameters:
Parameter Name                           Parameter Value:

CLIENT_COMMAND
scott/******** schemas=scott directory=exp_dir
dumpfile=exp_schema.dmp logfile=exp_schema.log job_name=expschema
State: IDLING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /home/oracle/scott/exp_schema.dmp


bytes written: 4,096


Worker 1 Status:
Process Name: DW01
State: UNDEFINED
Export> continue_client
Job EXPSCHEMA has been reopened at Tuesday, 02 August, 2011 22:25
Restarting "SCOTT"."EXPSCHEMA": scott/******** schemas=scott
directory=exp_dir dumpfile=exp_schema.dmp logfile=exp_schema.log
job_name=expschema
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
^C
Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes
Once you kill the export job, its details are removed from dba_datapump_jobs.
$ sqlplus "/ as sysdba"
SQL*Plus: Release 11.1.0.7.0 - Production on Tue Aug 2 22:26:46 2011
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select OWNER_NAME,JOB_NAME,OPERATION,JOB_MODE,STATE from
dba_datapump_jobs;
no rows selected


9. Oracle 10g/11g Data Pump EXPDP QUERY parameter option

The Data Pump QUERY option is used to export a subset of table data according to a
WHERE filter clause. Please find the examples below.

EXPDP with a parameter file (parfile)

Suppose you want to export two tables using WHERE clauses; you can
specify a WHERE clause for each table.
SQL> select count(*) from object_list where object_name like 'EIM%';

  COUNT(*)
----------
   2388224
Parfile content:
userid="/ as sysdba"
job_name=query_export
query=test.OBJECT_LIST:"WHERE object_name like 'EIM%'",
      test.candidate:"WHERE NAME='James'"
tables=test.object_list, test.candidate
directory=EXP_DIR
dumpfile=QUERY_EXP.dmp
logfile=QUERY_EXP.log
$ expdp parfile=exp.par
Export: Release 11.2.0.2.0 - Production on Fri Jan 27 00:49:35 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit
Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options
Starting "SYS"."QUERY_EXPORT": /******** AS SYSDBA parfile=exp.par
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.678 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TEST"."OBJECT_LIST"                      218.6 MB  2388224 rows
. . exported "TEST"."CANDIDATE"                        5.453 KB        1 rows

Master table "SYS"."QUERY_EXPORT" successfully loaded/unloaded


**********************************************************************
Dump file set for SYS.QUERY_EXPORT is:
/home/oracle/shony/QUERY_EXP.dmp
Job "SYS"."QUERY_EXPORT" successfully completed at 00:49:55

EXPDP Command line option with QUERY


$ expdp job_name=query_export query=test.OBJECT_LIST:\"WHERE object_name
like \'EIM\%\'\", test.candidate:\"WHERE NAME=\'James\'\" tables=test.object_list,
test.candidate directory=EXP_DIR dumpfile=QUERY_EXP.dmp
logfile=QUERY_EXP.log
Export: Release 11.2.0.2.0 - Production on Fri Jan 27 01:10:15 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Username: / as sysdba
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit
Production
With the Partitioning, Real Application Clusters, Automatic Storage Management,
OLAP,
Data Mining and Real Application Testing options


Starting "SYS"."QUERY_EXPORT": /******** AS SYSDBA


job_name=query_export query=test.OBJECT_LIST:"WHERE object_name like 'EIM
%'", test.candidate:"WHERE NAME='James'" tables=test.object_list, test.candidate
directory=EXP_DIR dumpfile=QUERY_EXP.dmp logfile=QUERY_EXP.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.678 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type
TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "TEST"."OBJECT_LIST"                      218.6 MB  2388224 rows
. . exported "TEST"."CANDIDATE"                        5.453 KB        1 rows

Master table "SYS"."QUERY_EXPORT" successfully loaded/unloaded


**********************************************************************
Dump file set for SYS.QUERY_EXPORT is:
/home/oracle/shony/QUERY_EXP.dmp
Job "SYS"."QUERY_EXPORT" successfully completed at 01:10:42

