• EXPDP and IMPDP are used to take backups of a database while the database is online. This is also known as an online (logical) backup.
• Datapump was introduced in Oracle 10g as a new feature. It is a superset of the previous export and import utilities.
• Oracle Datapump enables the DBA to transfer data as well as metadata much faster than the traditional export and import.

HERE ARE THE REASONS WHY ORACLE DATAPUMP IS FASTER THAN TRADITIONAL EXPORT AND IMPORT:

• Oracle Datapump uses parallel streams of data to maximize its throughput.
• Oracle Datapump uses the Datapump APIs to load and unload the data from the dumpfile or database.
• Oracle Datapump fetches the data in terms of BLOCKS, whereas export and import fetch data in terms of BYTES.
• In Oracle Datapump everything is done on the server side; it utilizes resources on the server to perform the job.
• In traditional export and import most of the work is done on the client side.

MAIN FEAURES:
Ñc Reorginsation of database.
Ñc Upgradation of oracle database and oracle software upgradation.
Ñc U can attach and detach the job.
Ñc User interative commands.
Ñc U can know the job status
Ñc U can estimate the size of the database before u perform logical
backup of a database by estimate_only option(dumpfile should not
be specified).
Ñc U can kill the job or can stop the while is executing
Ñc Paralled execution of jobs.
When u perform datapump jobs, it uses the datapump API that are

1.DBMS_DAAPUMP:

With these package the clients can access both expdp and impdp.

his a metadata API (DBMS_DAAPUMP) that which has guts of datapump


technology that is in form of procedures for data dictionary loading and unloading
objects.
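As a rough sketch of how a client could drive an export through this API (the job name, dump file name, and schema used here are assumptions; the DUMP directory object is the one created later in these notes):

DECLARE
  h      NUMBER;
  jstate VARCHAR2(30);
BEGIN
  -- open a schema-mode export job (job name is arbitrary)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'U1_API_EXP');
  -- attach a dump file in the DUMP directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'api_exp.dmp', directory => 'DUMP');
  -- restrict the export to the U1 schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''U1'')');
  -- start the job and wait until it completes
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => h, job_state => jstate);
END;
/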

2.DBMS_MEADAA:

With these metadata API we can select and extract appropriate metadata from the
database export and import. hese DBMS_MEADAA was available with oracle 9i.
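As a quick illustration (the table and schema names here are only placeholders), DBMS_METADATA can also be queried directly to extract the DDL of an object:

SQL> set long 100000
SQL> select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') from dual;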

DAAPUMP ARC IECURE

Oracle datapump takes help of some processes in background to perform its job
faster.

he process are

a)c Master process


b)c Shadow process
c)c Client process
d)c Worker process
And below two are used by master table for updation of worker
progress in term these are also used by worker process to process
the job request given by client process to server process.
e)c Control queue
f)c Stack queue

When u perform export it usually select·s the data from the database(logical backup
of database) and while u import it usually performs the INSERS AND DDL operations
on the database.

>>COMPONENS OF  E DAABASE:

Expdp it usually unloads data from database or database objects.


Impdp it is nothing but it loads the data from database or database objects.

Dumpfile is a file it unloads the data from the database into operating system
files. It also contains table data and metadata information stored in it .

Directory is an object the refers to dumpfile and logfile or location where dumpfile
and logfile resides.

Logfile is standard file where logging messages are written while u perform import
and export.

Sqlfile is special parameter file that write all DDL operations when import is
performed.

What happens when you perform export and import of the database:

First, when you perform an expdp or impdp, Oracle Datapump creates a master table. Along with it a master process is started; its actual name is the MASTER CONTROL PROCESS, and its name is of the form <instance_name>_DMnn_<pid>.

The master process creates worker processes to handle the job requests given by the client process. The client process hands the request to the server process to execute the intended jobs. The server process establishes a session irrespective of the client process.

The master process updates the master table with the status of the job.

>>The master process does the following:

1. Creates and manages the jobs and creates a master table to monitor the job.

2. Creates and manages the worker processes.

3. Monitors the job and logs the progress.

4. Manages the job restart information in the master table.

5. Creates the name for the job.

The master process can handle only one job at a time. For each job a master table, a master process, and a worker process are created. The master table is created in the schema of the user who is performing the export.

 E MASER ABLE KEEPS EYE ON FOLLOWING:

1.c State of location of the objects in the dumpfile set.


2.c Set of parameters
3.c Status of worker process
4.c State of the objects in the export/import job

When an export is in progress, the master process divides the given job into a number of tasks. The master table, with the help of the master process, schedules these tasks in the control queue. The worker processes (DWnn) pick these tasks from the control queue and execute them. Completed tasks are enqueued in a status queue, which monitors the jobs, and the master table gets updated by the master process, which gathers the progress from the status queue.

When Datapump finishes its export, the master table is deleted automatically after writing information about the master table, and about what type of export was performed, into the control table of the dumpfile. If you have killed the job (KILL_JOB), the master table is automatically deleted. If you have pressed CTRL+C or used STOP_JOB, the master table is not deleted; it is maintained so that it can be reused when you resume the job.

When you import a database object, the server process checks for and verifies the information present in the control table of the dumpfile and then checks the master table information. In any case, when you perform an import the server process fetches the data from the dumpfile and performs the necessary INSERTs and DDL pertaining to the database objects being imported, using the master table, which holds the key information needed to perform all these operations.

To know what DDL operations occurred, you can use the SQLFILE parameter when you perform an import.
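For example (the output file name ddl_only.sql is just an assumption), an import of this kind writes the DDL it would have executed into the named file instead of running it:

$> impdp u1/u1 directory=dump dumpfile=f1.dmp sqlfile=ddl_only.sql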

Worker process:

Created by the master process, and intended to work on behalf of the master process.

The worker process is usually represented in the form DWnn.

The name of a worker process looks like this: <instance_name>_DWnn_<pid>.

The creation of worker processes depends on the parallelism; the major part of the work is done by the worker processes.

Shadow process:

This is the actual process that creates the master table and the master process. This process terminates as soon as the job is completed.

Client process:

This process establishes a connection with the server process on the server side to perform the intended task.

When you perform a datapump export or import, a master table is created at the database level.

To see it, open another terminal session, connect to the database, and use the following query while expdp is running.

An extra table appears at the database level when datapump jobs are run:

SQL> select count(*) from tab;

This extra count is nothing but the master table created in the background by the shadow process. To see the master table you can use the following query:

SQL> select * from dba_datapump_jobs;

To know what jobs have been performed, use the following query:

SQL> select opname, target_desc, sofar, totalwork from v$session_longops;

To know which user session is currently attached to a datapump job:

SQL> select sid, serial#
     from v$session a, dba_datapump_sessions b
     where a.saddr = b.saddr;

Related views:

dba_datapump_jobs
dba_datapump_sessions
user_datapump_jobs
v$session_longops

Advantages

1. Parallel query execution:

   Set PARALLEL=3 or more, which increases the speed of the operation, but you need to have the same number of dumpfiles as the number you specified for PARALLEL (use dumpfile=<name>%U.dmp, where %U expands to a two-digit number ranging from 01 to 99); see the sketch after this list.

2. Ability to restart a job:

   You can restart a job that was stopped, from the client window.

3. Ability to stop a job:

   You can stop a long-running job to enhance the performance of the database.
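A minimal sketch of such a parallel full export (the file names are assumptions); with parallel=3 the %U template lets datapump create the matching number of dump files:

$> expdp system/manager directory=dump dumpfile=full%U.dmp logfile=full.log full=y parallel=3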

4. Network mode operation:

   In this mode you need a database link between the source and the remote database. Using that link, a datapump export or import can be run against the remote database, i.e. the data of the source database can be pulled by the remote database for exports and imports without the need to create a dump file on the remote database. A sketch follows below.
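A sketch of a network-mode import, assuming a database link named srclink that points to the source database (the link name, connect string, and schema are assumptions):

SQL> create database link srclink connect to system identified by manager using 'SRCDB';

$> impdp system/manager directory=dump logfile=net_imp.log network_link=srclink schemas=scott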

5. Filtering of metadata:

   Here you can filter the database objects as well as the metadata of the database by using the CONTENT option and the INCLUDE and EXCLUDE options.

CONTENT:

Filters the metadata information by using the options:

1. ALL

2. DATA_ONLY

3. METADATA_ONLY

INCLUDE AND EXCLUDE:

Specify what you want to include and exclude during the import and export of the database.

DIRECT LOADING AND EXTERNAL TABLES:

Direct loading:

Direct path loading uses an internal stream format that is the same as the dumpfile format.

In other words, it reads the data directly from the datafiles without making use of the database buffer cache, extracts and formats the data, and writes it to the dumpfiles. While importing, it reads the dumpfile contents, formats them, and writes them directly to the datafiles without using the database buffer cache. While loading the data it uses the concept of the high water mark, and when the data load is complete it adjusts the high water mark after writing to the blocks. During these import and export operations it keeps SGA usage at a minimum without affecting the end users.

External tables loading:

During loading or unloading this path uses the database buffer cache to extract the data from the database. While importing, it uses the database buffer cache for loading the data into the datafiles, and it also uses the SGA for executing and performing complex inserts and DDL tasks during loading.

• Oracle chooses what is best for doing these operations.

• Oracle uses external tables loading for some of these cases:

1. Active triggers on tables.

2. Composite partitions.

3. Referential integrity constraints.

4. A partition with global indexes.

5. Tables with domain indexes.

6. Fine-grained access control on a table with the insert option.

EXAMPLES:

> Before you perform a datapump export, check streams_pool_size. Although datapump uses the SGA to perform the export, if that is not sufficient it uses the streams pool.

Set this parameter:

SQL> alter system set streams_pool_size=32m;

1. First you need to create a directory at the database level:

   SQL> create directory dump as '/disk1/oradata/rac/dump';

2. Create the corresponding directory at the operating system level as the oracle user:

   oracle$> mkdir -p /disk1/oradata/rac/dump

3. At the database level, check for the directory by querying the view:

   SQL> select * from dba_directories;

• Directories are named objects that datapump maps to a specific location of operating system files.

• The default directory created at database creation time, or while upgrading a database, is DATA_PUMP_DIR; this is mapped to the dpdump directory.

• The default directory location is $ORACLE_BASE/admin/log or $ORACLE_HOME/admin/log.

• If $ORACLE_BASE is available it uses that location; otherwise it uses the $ORACLE_HOME location as the default.

• If you have exported a directory of your own and it is not present, then it writes to the log file location at $ORACLE_HOME/rdbms/log.

• Only SYSTEM and SYSDBA users, or a user with the sysdba or system privilege, can use the default directory.

Specifying a default directory:

• You can export the directory environment variable, overriding the default directory:

• $> export DATA_PUMP_DIR=DUMP

• Now this is your default directory, even though dpdump still exists.

• You can also do your export without the DIRECTORY parameter.

• Actually, by default, the system user can perform an export without specifying the directory parameter in expdp:

• $> expdp system/manager full=y dumpfile=full1.dmp logfile=full1.log


Granting roles and privileges to a user:

• Ordinary users can also perform datapump expdp and impdp if they are granted the following:

SQL> create user u1 identified by u1;

SQL> grant connect, resource to u1;

• Then, to perform the export, the user must have read and write permission on the directory you have created and must be granted a role that allows the export:

SQL> grant read, write on directory dump to u1;

SQL> grant exp_full_database, imp_full_database to u1;

SQL> conn u1/u1

• Create a table X and insert some hundreds of records, for example as sketched below.
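A minimal sketch of such a test table (the column definitions and row count are assumptions):

SQL> create table x (id number, name varchar2(30));
SQL> insert into x select level, 'row '||level from dual connect by level <= 500;
SQL> commit;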

SQL> conn / as sysdba

• Now take a logical backup of the database with user u1.

Full database export:

• $> expdp u1/u1 directory=dump dumpfile=f1.dmp logfile=f1.log full=y

• Here the DIRECTORY parameter is optional, as you have exported the directory at the operating system level.

• After the export is completed, connect to user u1 and drop the table; now import the data.

Table-level import:

• $> impdp u1/u1 directory=dump dumpfile=f1.dmp logfile=f1.log tables=u1.X

• After completion of the import you can check whether user u1 got the table back or not.

Remap_schema:

• You can also remap the schema of one user, say u1, to another user, say u2:

$> impdp u1/u1 directory=dump dumpfile=f1.dmp logfile=f1.log remap_schema=u1:u2

Estimate_only:

• You can estimate the size of the database by using the estimate_only option. While executing this command it doesn't perform any export:

$> expdp system/manager directory=dump logfile=f2.log estimate_only=y

Estimate:

• You can also estimate your schemas; this can be done using estimate={blocks|statistics}:

$> expdp system/manager directory=dump dumpfile=f1.dmp logfile=f2.log estimate={blocks|statistics} reuse_dumpfiles=y

Filtering of metadata:

• content={all | data_only | metadata_only}

$> expdp system/manager directory=dump dumpfile=f1.dmp logfile=f1.log content={all | data_only | metadata_only}

• Now using include and exclude, for example:

$> expdp scott/tiger directory=dump dumpfile=f1.dmp logfile=f1.log schemas=scott include=TABLE:\"IN ('EMP','DEPT')\"

$> expdp scott/tiger dumpfile=dump:file%U.dmp schemas=scott exclude=TABLE:\"='EMP'\"

Compression:

• compression={none | metadata_only}; by default it is metadata_only.

$> expdp system/manager directory=dump dumpfile=f1.dmp content=metadata_only logfile=f2.log schemas=scott compression={none|metadata_only} reuse_dumpfiles=y

Filesize:

• You can also set the filesize for each dumpfile in bytes, megabytes, or gigabytes:

$> expdp system/manager directory=dump dumpfile=f1.dmp content=metadata_only logfile=f2.log schemas=scott compression={none|metadata_only} reuse_dumpfiles=y filesize=1024mb

Job_name:

You can specify your own job name when you perform a datapump export:

$> expdp system/manager directory=dump dumpfile=f1.dmp logfile=f1.log schemas=scott job_name=scott

Status:

You can know the status of a running job at regular intervals (in seconds):

$> expdp system/manager status=60

Attach:

You can attach your datapump client session to a running job, which places you in interactive mode:

$> expdp system/manager attach=<job_name>

Add_file:

While in interactive mode you can add a dump file to the dump file set, for example when the existing dump files have run out of space:

export> add_file=data_pump_dir:f2.dmp

Start_job:

Once you finish adding space to the export directory, you use the interactive command START_JOB to continue the stopped export job, as shown here:

export> START_JOB

Continue_client:

To resume the logging of the output on your screen, you issue the CONTINUE_CLIENT command, as shown here:

export> CONTINUE_CLIENT

Kill_job:

You can kill a job that is running:

export> KILL_JOB

Ctrl+c:

You can use Ctrl+C to enter into interactive mode; to stop the job it looks like this:

export> STOP_JOB

If you don't specify a dump file, the default dump file expdat.dmp is used, as the traditional exp and imp do.

During import you can also filter the metadata and use remap_tablespace, remap_datafile, network mode operation, tables, and tablespaces.
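For instance, remap_tablespace moves the imported objects into a different tablespace (the tablespace names used here are assumptions):

$> impdp u1/u1 directory=dump dumpfile=f1.dmp logfile=f1.log remap_tablespace=users:example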

Interactive Data Pump Export Commands

Command          Description
ADD_FILE         Adds a dump file to the dump file set.
CONTINUE_CLIENT  Returns to logging mode. The job will be restarted if it was idle.
EXIT_CLIENT      Quits the client session and leaves the job running.
HELP             Provides summaries of the usage of the interactive commands.
KILL_JOB         Detaches and deletes the job.
PARALLEL         Changes the number of active workers for the current job.
START_JOB        Starts or resumes the current job.
STATUS           Sets the frequency of job monitoring (in seconds). The default status is zero.
STOP_JOB         Performs an orderly shutdown of the job execution and exits the client.
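A sketch of an interactive session using a few of these commands (the job name scott is taken from the earlier job_name example):

$> expdp system/manager attach=scott
export> status
export> parallel=4
export> exit_client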
