- EXPDP and IMPDP are used to take backups of a database while the database is online. This is also known as an online backup.
- Data Pump was introduced in Oracle 10g as a new feature. It is a superset of the previous export and import utilities.
- Oracle Data Pump enables the DBA to transfer data as well as metadata much faster than the traditional export and import.
- THERE ARE REASONS WHY ORACLE DATAPUMP IS FASTER THAN TRADITIONAL EXPORT AND IMPORT:
- Oracle Data Pump uses parallel streams of data to maximize its throughput.
- Oracle Data Pump uses the Data Pump APIs to load and unload the data from the dump file or the database.
- Oracle Data Pump fetches data in terms of BLOCKS, whereas export and import fetch data in terms of BYTES.
- In Oracle Data Pump everything is done on the server side; it utilizes resources on the server to perform the job.
- In traditional export and import most of the work is done on the client side.
MAIN FEATURES:
- Reorganisation of the database.
- Upgrade of the Oracle database and of the Oracle software.
- You can attach to and detach from a job.
- User interactive commands.
- You can check the job status.
- You can estimate the size of the database before you perform a logical backup of the database by using the estimate_only option (a dump file should not be specified).
- You can kill a job, or stop it while it is executing.
- Parallel execution of jobs.
When you perform Data Pump jobs, they use the Data Pump APIs, which are:
1. DBMS_DATAPUMP:
With this package the clients can access both expdp and impdp.
2. DBMS_METADATA:
With this metadata API we can select and extract the appropriate metadata from the database during export and import. DBMS_METADATA was already available with Oracle 9i.
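As a rough sketch of driving the DBMS_DATAPUMP API directly (the job name, dump file name and directory object below are only illustrative assumptions), a schema-mode export could be started from PL/SQL like this:
DECLARE
  h      NUMBER;
  state  VARCHAR2(30);
BEGIN
  -- open a schema-mode export job (the job name is an example)
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_EXP_JOB');
  -- write to an example dump file through the DATA_PUMP_DIR directory object
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott.dmp', directory => 'DATA_PUMP_DIR');
  -- restrict the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
  DBMS_DATAPUMP.START_JOB(h);
  -- wait until the job finishes and capture its final state
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
END;
/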
Oracle Data Pump takes the help of some background processes to perform its job faster.
When you perform an export it essentially SELECTs the data from the database (a logical backup of the database), and when you import it performs the INSERTs and DDL operations on the database.
>>COMPONENTS OF DATAPUMP:
The dump file is the operating system file into which the data is unloaded from the database. It contains both table data and metadata information.
The directory is an object that refers to the dump file and log file, i.e. the location where the dump file and log file reside.
The log file is a standard file where logging messages are written while you perform an import or export.
SQLFILE is a special parameter that writes out all the DDL operations when an import is performed.
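For reference, a minimal sketch of creating a directory object and granting access to it (the directory name and path are assumptions):
SQL> CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;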
First, Oracle Data Pump creates a master table at the time you perform expdp or impdp. This master table in turn creates a process called the master process. The actual name of the master process is the MASTER CONTROL PROCESS. The master process is of the form <instance_name>_DMnn_<pid>.
The master process creates worker processes to handle the requests of the jobs given by the client process. The client process hands the request to the server process to perform the intended jobs. The server process establishes a session irrespective of the client process.
The master table creates the master process, and this master process updates the master table about the status of the job.
1. It creates and manages the jobs and creates a master table to monitor the job.
When an export is in progress, the master process divides the given job into a number of tasks. The master table, with the help of the master process, schedules these tasks in the control queue. The worker processes (DWnn) pick these tasks from the control queue and execute them. Completed tasks are enqueued in a status queue, which monitors the jobs, and the master table is updated by the master process, which gathers the progress from the status queue.
When you import database objects, the server process checks and verifies the information present in the control table of the dump file and then checks the master table information. In any case, when you perform an import the server process fetches the data from the dump file and performs the necessary INSERTs and DDL pertaining to the database objects being imported, using the master table, which holds the key information needed to perform all these operations.
To know what DDL operations occurred, you can use the SQLFILE parameter when you perform an import.
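For example, a hedged sketch that writes the DDL from a dump file into a script instead of executing it (the file and directory names are assumptions):
$>impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp sqlfile=scott_ddl.sql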
Worker process:
Created by the master process and intended to work on behalf of the master process.
Shadow process:
This is the actual process that creates the master table and the master process. This process terminates as soon as the job is completed.
Client process:
This process establishes the connection with the server process on the server side to perform the intended task.
To see this, open another terminal session, connect to the database, and query the views listed below to see the master table and the job while expdp is running (a sample query is shown after the list).
An extra process runs at the database level when Data Pump jobs are run.
Related views:
dba_datapump_jobs
dba_datapump_sessions
v$session_longops
user_datapump_sessions
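For instance, a sample query against these views to watch a running job and its rough progress (the column choice is illustrative):
SQL> SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;
SQL> SELECT opname, sofar, totalwork FROM v$session_longops WHERE totalwork > 0;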
Advantages
For this you need a database link between the source and the remote database. By using the dump file of the source database you can perform Data Pump export and import on a remote database, i.e. the dump file of the source database can be used by the remote database for exports and imports without the need to create a dump file on the remote database.
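This appears to describe the NETWORK_LINK option; as a minimal sketch (the database link and schema names are assumptions), a schema could be imported directly over the link like this:
$>impdp system/password network_link=source_db_link schemas=scott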
5. Filtering of metadata:
Here we can filter the database objects as well as the metadata of the database by using
CONTENT:
1. ALL
2. DATA_ONLY
3. METADATA_ONLY
and by specifying what you want to include and exclude during the import and export of the database (an example follows below).
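As a small hedged example of the CONTENT filter, exporting only the metadata of a schema (the directory, file and schema names are assumptions):
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott_meta.dmp schemas=scott content=metadata_only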
Direct loading:
Direct loading uses an internal stream format that is the same as the dump file format. It loads the data directly from the data files without making use of the database buffer cache, extracts and formats the data, and writes it to the dump files. While importing, it reads the dump file contents, formats them, and writes them directly to the data files without using the database buffer cache. While loading the data it uses the concept of the high water mark, and when the load is completed it adjusts the high water mark after writing to the blocks. During this operation of importing and exporting it keeps SGA usage at a minimum without affecting the end users.
External loading:
During loading or unloading it uses the database buffer cache to extract the data from the database. While importing it uses the database buffer cache for loading the data into the data files, and it also uses the SGA for executing and performing complex inserts and DDL tasks during loading.
EXAMPLES:
OR $ORACLE_HOME/ADMIN/LOG
- Only SYSTEM and SYSDBA can use the default directory, i.e. a user with the sysdba or system privilege can use the default directory.
- $>export DATA_PUMP_DIR=DUMP
- Even ordinary users can perform Data Pump expdp and impdp if they are granted the following options:
SQL>conn / as sysdba
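A rough sketch of the grants usually involved, assuming a directory object dp_dir and a user u1 (both names are assumptions):
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO u1;
SQL> GRANT EXP_FULL_DATABASE, IMP_FULL_DATABASE TO u1;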
Table-level:
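A possible table-level export, with the table names assumed:
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=emp_dept.dmp tables=EMP,DEPT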
Remap_schema:
- You can also remap the schema of one user, say u1, to another user, assume u2:
remap_schema=u1:u2
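As a full command this could look like the following sketch (the directory and dump file names are assumptions):
$>impdp system/password directory=DATA_PUMP_DIR dumpfile=u1.dmp remap_schema=u1:u2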
Estimate_only:
- You can estimate the size of the database by using the estimate_only option. While executing this command it does not perform any export.
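A minimal sketch; note that no dump file is given (the schema name is an assumption):
$>expdp system/password schemas=scott estimate_only=y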
Estimate:
- You can also estimate the size of your schemas; this is done using
estimate={blocks/statistics}
estimate={blocks/statistics} reuse_dumpfiles=y
Filtering of metadata:
- content={all/data_only/metadata_only}
- expdp scott/tiger DUMPFILE=dump:file%U.dmp schemas=SCOTT exclude=TABLE:\"='EMP'\"
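Similarly, an INCLUDE filter can limit the export to named objects, for example (the object names are assumptions):
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott_tabs.dmp schemas=scott include=TABLE:\"IN ('EMP','DEPT')\"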
Compression:
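A sketch of compressing the metadata written to the dump file (the directory and file names are assumptions; compressing table data as well is only available from 11g onwards):
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott_comp.dmp schemas=scott compression=metadata_only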
Filesize:
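A sketch of splitting the export into dump files of at most 2 GB each using the %U substitution variable (the size and names are assumptions):
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott_%U.dmp filesize=2G schemas=scott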
Job_name:
You can specify your own job name when you perform a Data Pump export.
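For example (the job name here is an assumption):
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp schemas=scott job_name=scott_exp_job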
Status:
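A sketch of asking the client to print the job status at a regular interval, here every 60 seconds (the interval is an assumption):
$>expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=scott.dmp schemas=scott status=60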
Attach:
You can attach your Data Pump client session to a running job, which places you in interactive mode.
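A sketch of attaching to a running job by its name (the job name is an assumption):
$>expdp scott/tiger attach=scott_exp_job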
Add_file:
If the export job runs out of space in the dump file set, you can add another dump file from the interactive prompt, as shown here:
export> ADD_FILE=data_dump_dir:f2.dmp
Start_job:
Once you finish adding space to the export directory, you use the interactive command START_JOB to continue the stopped export job, as shown here:
export> START_JOB
Continue_client:
To resume the logging of the output on your screen, you issue the CONTINUE_CLIENT command, as shown here:
export> CONTINUE_CLIENT
Kill_job:
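From the interactive prompt the job can be terminated with KILL_JOB; unlike STOP_JOB, a killed job cannot be restarted:
export> KILL_JOB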
If a dump file is not specified, the default dump file expdat.dmp is used, as the traditional exp and imp utilities do.
Command            Description
ADD_FILE           Adds a dump file to the dump file set.
CONTINUE_CLIENT    Returns to logging mode. The job will be restarted if it was idle.
EXIT_CLIENT        Quits the client session and leaves the job running.
HELP               Provides summaries of the usage of the interactive commands.
KILL_JOB           Detaches and deletes the job.
PARALLEL           Changes the number of active workers for the current job.
START_JOB          Starts or resumes the current job.
STATUS             Sets the frequency of job monitoring (in seconds). The default status is zero.
STOP_JOB           Performs an orderly shutdown of the job execution and exits the client.