Goal
RDBMS version 10g offers a new and improved tool for diagnosing database
performance issues: the Automatic Workload Repository (AWR).
However, there are still a number of customers using the statistics package
(Statspack) initially introduced in RDBMS version 8.1.
The goal of this document is to further assist customers/engineers when installing
and using the database performance tool Statspack.
During install of the RDBMS product, Oracle stores a document entitled spdoc.txt.
The spdoc.txt file will be located in the following directory upon successful install of
the RDBMS product 8.1.7 or higher: $ORACLE_HOME/rdbms/admin/.
The Statspack README file (spdoc.txt) includes specific updated information and
history on this tool, as well as platform- and release-specific information that will
help when installing and using this product.
Please find below spdoc.txt for version 10.2 in its entirety, to help guide you
through installation and the most common issues you may encounter while running
Statspack.
The information in this document will help you with all versions of the RDBMS
Statspack product.
However, Oracle still suggests you refer to your
$ORACLE_HOME/rdbms/admin/spdoc.txt for platform- and version-specific
information on running Statspack reports (i.e. section 4 below).
Solution
-----------------------------------------------------------------------
Oracle10g Server
Release 10.2
Production
-------------------------------------------------------------------------
-------------------------------------------------------------------------
- Statspack collects more data, including high resource SQL (and the optimizer
execution plans for those statements)
- Statspack separates the data collection from the report generation. Data is
collected when a 'snapshot' is taken; viewing the data collected is in the hands of the
performance engineer when he/she runs the performance report
NOTE: The term 'snapshot' is used to denote a set of statistics gathered at a single
time, identified by a unique Id which includes the snapshot number (or snap_id).
This term should not be confused with Oracle's Snapshot Replication technology.
Statspack is a set of SQL, PL/SQL and SQL*Plus scripts which allow the collection,
automation, storage and viewing of performance data. A user is automatically
created by the installation script - this user, PERFSTAT, owns all objects needed by
this package. This user is granted limited query-only privileges on the V$views
required for performance tuning.
Statspack users will become familiar with the concept of a 'snapshot'. A 'snapshot'
is the term used to identify a single collection of performance data. Each snapshot
taken is identified by a 'snapshot id' which is a unique number generated at the time
the snapshot is taken; each time a new collection is taken, a new snap_id is
generated.
The snap_id, along with the database identifier (dbid) and instance number
(instance_number) comprise the unique key for a snapshot (using this unique
combination allows storage of multiple instances of a Clustered database in the same
tables).
Once snapshots are taken, it is possible to run the performance report. The
performance report will prompt for the two snapshot ids the report will process. The
report produced calculates the activity on the instance between the two snapshot
periods specified, in a similar way to the BSTAT/ESTAT report; to compare - the first
snap_id supplied can be considered the equivalent of running BSTAT; the second
snap_id specified can be considered the equivalent of ESTAT. Unlike BSTAT/ESTAT
which can by its nature only compare two static data points, the report can compare
any two snapshots specified.
Enterprise Manager
------------------
Statspack allows you to capture Oracle instance-related performance data, and
report on this data in a textual format.
For EM managed databases in 9i, Oracle Enterprise Manager uses Statspack data and
displays it graphically. Starting with 10g, Enterprise Manager instead uses data
collected by the Automatic Workload Repository (AWR). AWR data is internally
captured and stored by Oracle 10g databases.
For more information about Oracle Enterprise Manager visit the Oracle website
oracle.com --> Database --> Manageability
The AWR schema was initially based on the Statspack schema, but has since been
modified. Because of this shared history, there are some similarities (e.g. concept of
a snapshot, similar base tables). However, AWR is separate from Statspack.
For more information on using AWR, please see the Oracle 10g Server Performance
Tuning Guide. For license information regarding AWR, please see the Oracle database
Licensing Information Manual.
If you are going to use AWR instead of Statspack, and you have been using
Statspack at your site, it is recommended that you continue to capture Statspack
data for a short time (e.g. one month) after the upgrade to 10g. This is because
comparing post-upgrade Statspack data to pre-upgrade Statspack data can make
initial upgrade problems easier to diagnose.
Long term, there is typically little reason to collect data through both AWR and
Statspack. If you choose to use AWR instead of Statspack, you should keep a
representative set of baselined Statspack data for future reference.
2. Statspack Configuration
------------------------------
The amount of database space required by the package will vary considerably based
on the frequency of snapshots, the size of the database and instance, and the
amount of data collected (which is configurable).
Space Requirements
------------------
The default initial and next extent sizes are 100KB, 1MB, 3MB or 5MB for all
Statspack tables and indexes. To install Statspack, the minimum space requirement
is approximately 100MB. However, the amount of space actually allocated will
depend on the storage characteristics of the tablespace Statspack is installed in
(for example, if your minimum extent size is 10MB, then the storage requirement
will be considerably more than 100MB).
During the installation you will be prompted for the PERFSTAT user's password and
default and temporary tablespaces.
The default tablespace will be used to create all Statspack objects (such as tables
and indexes). Oracle recommends using the SYSAUX tablespace for the PERFSTAT
user's default tablespace; the SYSAUX tablespace will be used by default during the
installation, if no other is specified.
A temporary tablespace is used for workarea activities, such as sorting (for more
information on temporary tablespaces, see
the Oracle10g Concepts Manual). The Statspack user's temporary tablespace will be
set to the database's default temporary tablespace by the installation, if no other
temporary tablespace is specified.
NOTE:
o A password for the PERFSTAT user is mandatory and there is no default password;
if a password is not specified, the installation will abort with an error indicating this
is the problem.
o Do not specify the SYSTEM tablespace for the PERFSTAT user's DEFAULT or
TEMPORARY tablespaces; if SYSTEM is specified, the installation will terminate with
an error indicating this is the problem. This is enforced because Oracle does not
recommend using the SYSTEM tablespace to store statistics data, nor for
workareas. Use the SYSAUX (or a TOOLS) tablespace to store the data, and your
instance's TEMPORARY tablespace for workareas.
To run the installation script, you must use SQL*Plus and connect as a user with
SYSDBA privilege.
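As a sketch, the interactive install can be started as follows (the '?' is the
SQL*Plus shorthand for $ORACLE_HOME):

SQL> connect / as sysdba
SQL> @?/rdbms/admin/spcreate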
The spcreate install script runs 3 other scripts - you do not need to run these - these
scripts are called automatically:
1. spcusr -> creates the user and grants privileges
2. spctab -> creates the tables
3. spcpkg -> creates the package
Check each of the three output files produced (spcusr.lis, spctab.lis, spcpkg.lis) by
the installation to ensure no errors were encountered, before continuing on to the
next step.
Note that there are two ways to install Statspack - interactively (as shown above), or
in 'batch' mode; batch mode is useful when you do not wish to be prompted for the
PERFSTAT user's password, and default and temporary tablespaces.
e.g.
on Unix:
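A batch-mode run pre-defines the variables spcreate would otherwise prompt for;
the tablespace names and password below are placeholders only:

SQL> connect / as sysdba
SQL> define default_tablespace='SYSAUX'
SQL> define temporary_tablespace='TEMP'
SQL> define perfstat_password='perfstat_password'
SQL> @spcreate
SQL> undefine perfstat_password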
To correctly install Statspack after receiving errors during a previous installation,
first run the de-install script, then the install script. Both scripts must be run
from SQL*Plus.
SQL> @spdrop
SQL> @spcreate
Note: In a Clustered database environment, you must connect to the instance you
wish to collect data for.
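The snapshot itself is taken by connecting as the PERFSTAT user (the password
below is a placeholder) and executing the snap procedure, e.g.:

SQL> connect perfstat/perfstat_password
SQL> execute statspack.snap;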
This will store the current values for the performance statistics in the Statspack
tables, and can be used as a baseline snapshot
for comparison with another snapshot taken at a later time.
The default level of data collection is level 5. It is possible to change the amount of
data captured by changing the snapshot level, and the default thresholds used by
Statspack. For information on how to do this, please see the 'Configuring the amount
of data captured' section of this file.
Typically, in the situation where you would like to automate the gathering and
reporting phases (such as during a benchmark), you may need to know the snap_id
of the snapshot just taken. To take a snapshot and display the snap_id, call the
statspack.snap function. Below is an example of calling the snap function using an
anonymous PL/SQL block in SQL*Plus:
e.g.
SQL> variable snap number;
SQL> begin :snap := statspack.snap; end;
  2  /

PL/SQL procedure successfully completed.

SQL> print snap

      SNAP
----------
        12
To be able to make comparisons of performance from one day, week or year to the
next, there must be multiple snapshots taken over a period of time.
The best method to gather snapshots is to automate the collection on a regular time
interval. It is possible to do this:
- within the database, using the Oracle dbms_job procedure to schedule the
snapshots
- using Operating System utilities. On Unix systems, you could use utilities such as
'cron' or 'at'. On Windows, you could schedule a task (e.g. via Start> Programs>
Accessories> System Tools> Scheduled Tasks).
To use an Oracle-automated method for collecting statistics, you can use dbms_job.
A sample script on how to do this is supplied in spauto.sql, which schedules a
snapshot every hour, on the hour.
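The core of spauto.sql is a dbms_job.submit call along the following lines (a
sketch; the exact script text may vary slightly by release):

variable jobno number;
begin
  -- schedule statspack.snap to run every hour, on the hour
  dbms_job.submit(:jobno, 'statspack.snap;',
                  trunc(sysdate+1/24,'HH'),
                  'trunc(SYSDATE+1/24,''HH'')', TRUE);
  commit;
end;
/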
You may wish to schedule snapshots at regular times each day to reflect your
system's OLTP and/or batch peak loads. For example, take snapshots at 9am, 10am,
11am, 12 midday and 6pm for the OLTP load, then a snapshot at 12 midnight and
another at 6am for the batch window.
To change the interval of an existing job (in this example, job number 1), use
dbms_job.interval:
execute dbms_job.interval(1,'SYSDATE+(1/48)');
where 'SYSDATE+(1/48)' will result in the statistics being gathered every 1/48th of
a day (i.e. every 30 minutes).
For more information on dbms_job, see the Supplied Packages Reference Manual.
There are two reports available - an Instance report, and a SQL report:
- The Instance report (spreport.sql and sprepins.sql) is a general instance health
report covering all aspects of instance performance; it is usually the first report
examined.
Note: spreport.sql calls sprepins.sql, first defaulting the dbid and instance number of
the instance you are connected to. For more information on the difference between
sprepins and spreport, see the 'Running the instance report when there are multiple
instances' section of this document.
- The SQL report (sprepsql.sql and sprsqins.sql) is a report for a specific SQL
statement. The SQL report is usually
run after examining the high-load SQL sections of the instance health report. The
SQL report provides detailed statistics and data for a single SQL statement (as
identified by the Hash Value).
Note: sprepsql.sql calls sprsqins.sql, first defaulting the dbid and instance number of
the instance you are connected to. For more information on the difference between
sprsqins and sprepsql, see the 'Running the SQL report when there are multiple
instances' section of this document.
Both reports prompt for the beginning snapshot id, the ending snapshot id, and the
report name. The SQL report additionally requests the Hash Value for the SQL
statement to be reported on.
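For example, to run the instance report, connect as the PERFSTAT user in SQL*Plus
(the password is a placeholder) and start spreport; it will prompt for the values
described above:

SQL> connect perfstat/perfstat_password
SQL> @spreport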
Note: It is not correct to specify begin and end snapshots where the begin snapshot
and end snapshot were taken from different instance startups. In other words, the
instance must not have been shutdown between the times that the begin and end
snapshots were taken.
Both the snapshot level, and the thresholds specified will affect the amount of data
Statspack captures.
The higher the snapshot level, the more data is gathered. The default level set by
the installation is level 5.
For typical usage, a level 5 snapshot is effective on most sites. There are certain
situations when using a level 6 snapshot is beneficial, such as when taking a
baseline.
The events listed below are a subset of events which should prompt taking a new
baseline, using level 6:
- when taking the first snapshots
- when a new application is installed, or an application is modified/upgraded
- after gathering optimizer statistics
- before and after upgrading
The various levels are explained in detail in the 'Snapshot Levels - details' section
of this document.
There are other parameters which can be configured in addition to the snapshot
level.
These parameters are used as thresholds when collecting data on SQL statements;
data will be captured on any SQL statements that breach the specified thresholds.
Snapshot level and threshold information used by the package is stored in the
stats$statspack_parameter table.
5.3. Changing the default values for Snapshot Level and SQL Thresholds
If you wish to, you can change the default parameters used for taking snapshots, so
that they are tailored to the instance's workload.
The full list of parameters which can be passed into the modify_statspack_parameter
procedure are the same as those for the
snap procedure. These are listed in the 'Input Parameters for the SNAP and
MODIFY_STATSPACK_PARAMETERS procedures' section of this document.
e.g. Take a single level 6 snapshot (do not save level 6 as the default):
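A sketch of such a call, using the parameter name documented in the
input-parameters table below:

SQL> exec statspack.snap(i_snap_level=>6);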
o Taking a snapshot, and specifying the new defaults to be saved to the database
(using statspack.snap, and using the i_modify_parameter input variable).
Setting the i_modify_parameter value to true will save the new thresholds in the
stats$statspack_parameter table; these thresholds will be used for all subsequent
snapshots.
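For example (a sketch, using the parameter names documented in the
input-parameters table below):

SQL> exec statspack.snap(i_snap_level=>6, i_modify_parameter=>'true');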
If i_modify_parameter is set to false, or is omitted, the new parameter values are
not saved. Only the snapshot taken at that point uses the specified values; any
subsequent snapshots use the preexisting values in the stats$statspack_parameter
table.
The statspack.modify_statspack_parameter procedure, by contrast, changes the
values permanently, but does not take a snapshot.
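For example, to permanently change the default snapshot level to 6 using the
modify_statspack_parameter procedure named in section 5.3 (a sketch):

SQL> exec statspack.modify_statspack_parameter(i_snap_level=>6);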
In a level 5 snapshot (or above), note that the time required for the snapshot to
complete is dependent on the shared_pool_size and on the number of SQL
statements in the shared pool at the time the snapshot is taken: the larger the
shared pool, the longer the time taken to complete the snapshot.
SQL 'Thresholds'
The SQL statements gathered by Statspack are those which exceed one of six
predefined threshold parameters:
- number of executions of the SQL statement (default 100)
- number of disk reads performed by the SQL statement (default 1,000)
- number of parse calls performed by the SQL statement (default 1,000)
- number of buffer gets performed by the SQL statement (default 10,000)
- amount of sharable memory used by the SQL statement (default 1MB)
- number of versions of the SQL statement (default 20)
The values of each of these threshold parameters are used when deciding which SQL
statements to collect - if a SQL statement's resource usage exceeds any one of the
above threshold values, it is captured during the snapshot.
The SQL threshold levels used are either those stored in the table
stats$statspack_parameter, or by the thresholds specified when the snapshot is
taken.
Levels >= 6 Additional data: SQL Plans and SQL Plan usage
This level includes all statistics gathered in the lower level(s),
and additionally gathers optimizer execution plans, and plan usage data for each of
the high resource usage SQL statements captured.
To capture the plan for a SQL statement, the statement must be in the shared pool
at the time the snapshot is taken, and must exceed one of the SQL thresholds. To
gather plans for all statements in the shared pool, you can temporarily specify the
executions threshold (i_executions_th) to be zero (0) for those snapshots. For
information on how to do this, see the 'Changing the default values for Snapshot
Level and SQL Thresholds' section of this document.
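For example, to capture plans for all statements in the shared pool for a single
snapshot (a sketch, using the threshold parameter documented below):

SQL> exec statspack.snap(i_executions_th=>0);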
A level 7 snapshot captures Segment-level statistics for segments which are heavily
accessed or heavily contended for.
There are many uses for segment-specific statistics. Below are three examples:
- The statistics relating to physical reads and writes can help you decide to modify
the physical layout of some segments (or of the tablespaces they reside in). For
example, to better spread the segment IO load, you can add files residing on
different disks to a tablespace storing a heavily accessed segment, or you can
(re)partition a segment.
- High numbers of ITL waits for a specific segment may indicate a need to change
segment storage attributes such as PCTFREE and/or INITRANS.
- In a Real Application Clusters database, global cache statistics make it easy to spot
the segments responsible for much of the
cross-instance traffic.
Although Statspack captures all segment statistics, it only displays the following
statistics in the Instance report:
- logical reads
- physical reads
- buffer busy waits
- ITL waits
- row lock waits
- global cache cr blocks served *
- global cache current blocks served *
(Statistics marked with an asterisk are specific to Real Application Clusters
databases.)
The values of each of these thresholds are used when deciding which segments to
collect statistics for. If any segment's statistic value exceeds its corresponding
threshold value, all statistics for this segment are captured.
The threshold levels used are either those stored in the table
stats$statspack_parameter, or by the thresholds specified when
the snapshot is taken.
If you would like to gather session statistics and wait events for a particular session
(in addition to the instance statistics and wait events), it is possible to specify the
session id in the call to Statspack. The statistics gathered for the session will include
session statistics, session events and lock activity. The default behaviour is not to
gather session level statistics.
Note that in order for session statistics to be included in the report output, the
session's serial number (serial#) must be the same in the begin and end snapshots.
If the serial numbers differ, the session is not the same session, so it is not valid
to generate session statistics; in that case, a warning will appear (after the begin
and end snapshot ids have been entered by the user) to signal that the session
statistics cannot be printed.
                   Range of          Default
Parameter Name     Valid Values      Value    Meaning
------------------ ----------------- -------- ------------------------------
i_snap_level       0,5,6,7,10        5        Snapshot Level
i_ucomment         Text              <blank>  Comment to be stored with the
                                              snapshot
i_executions_th    Integer >=0       100      SQL Threshold: number of times
                                              the statement was executed
i_disk_reads_th    Integer >=0       1,000    SQL Threshold: number of disk
                                              reads the statement made
i_parse_calls_th   Integer >=0       1,000    SQL Threshold: number of parse
                                              calls the statement made
i_buffer_gets_th   Integer >=0       10,000   SQL Threshold: number of buffer
                                              gets the statement made
i_sharable_mem_th  Integer >=0       1048576  SQL Threshold: amount of
                                              sharable memory
i_version_count_th Integer >=0       20       SQL Threshold: number of
                                              versions of a SQL statement
i_seg_phy_reads_th Integer >=0       1,000    Segment statistic Threshold:
                                              number of physical reads on a
                                              segment
i_seg_log_reads_th Integer >=0       10,000   Segment statistic Threshold:
                                              number of logical reads on a
                                              segment
i_seg_buff_busy_th Integer >=0       100      Segment statistic Threshold:
                                              number of buffer busy waits for
                                              a segment
i_seg_rowlock_w_th Integer >=0       100      Segment statistic Threshold:
                                              number of row lock waits for a
                                              segment
i_seg_itl_waits_th Integer >=0       100      Segment statistic Threshold:
                                              number of ITL waits for a
                                              segment
i_seg_cr_bks_sd_th Integer >=0       1,000    Segment statistic Threshold:
                                              number of Consistent Read
                                              blocks served by the instance
                                              for the segment*
i_seg_cu_bks_sd_th Integer >=0       1,000    Segment statistic Threshold:
                                              number of CUrrent blocks served
                                              by the instance for the
                                              segment*
i_session_id       Valid sid from    0 (no    Session Id of the Oracle
                   v$session         session) Session to capture session
                                              granular statistics for
i_modify_parameter True,False        False    Save the parameters specified
                                              for future snapshots?
6. Time Units used for Performance Statistics
--------------------------------------------------
Oracle now supports capturing certain performance data with millisecond and
microsecond granularity.
For clarity, the time units used are specified in the column headings of
each timed column in the Statspack report. The convention used is:
(s) - a second
(cs) - a centisecond - which is 1/100th of a second
(ms) - a millisecond - which is 1/1,000th of a second
(us) - a microsecond - which is 1/1,000,000th of a second
7. Event Timings
-----------------
If timings are available, the Statspack report will order wait events by time
(in the Top-5 and background and foreground Wait Events sections).
NOTE: Statspack baseline does not perform any consistency checks on the
snapshots requested to be baselined (e.g. it does not check whether
the specified baselines span an instance shutdown). Instead, the
baseline feature merely marks Snapshot rows as worthy of keeping,
while other data can be purged.
New procedures and functions have been added to the Statspack package to
make and clear baselines: MAKE_BASELINE, and CLEAR_BASELINE. Both of these
are able to accept varying parameters (e.g. snap Ids, or dates, etc), and
can be called either as a procedure, or as a function (the function returns
the number of rows operated on, whereas the procedure does not).
A begin and end snap Id pair can be specified. In this case, you choose
either to baseline the range of snapshots between the begin and end
snapshot pair, or just the two snapshots. The default is to baseline
the entire range of snapshots.
A begin and end date pair can be specified. All snapshots which fall in
the date range specified will be marked as baseline data.
Procedure or Function
---------------------
It is possible to call either the MAKE_BASELINE procedure, or the
MAKE_BASELINE function. The only difference is the MAKE_BASELINE function
returns the number of snapshots baselined, whereas the MAKE_BASELINE
procedure does not.
Similarly, the CLEAR_BASELINE procedure performs the same task as the
CLEAR_BASELINE function, however the function returns the number of
baselined snapshots which were cleared (i.e. no longer identified as
baselines).
8.1.1. Input Parameters for the MAKE_BASELINE and CLEAR_BASELINE
procedure and function which accept Begin and End Snap Ids
This section describes the input parameters for the MAKE_BASELINE and
CLEAR_BASELINE procedure and function which accept Snap Ids. The input
parameters for both MAKE and CLEAR baseline are identical. The
procedures/functions will either baseline (or clear the baseline for) the
range of snapshots between the begin and end snap Ids identified (the
default), or if i_snap_range parameter is FALSE, will only operate on
the two snapshots specified.
If the function is called, it will return the number of snapshots
operated on.
                   Range of          Default
Parameter Name     Valid Values      Value   Meaning
------------------ ----------------- ------- -------------------------------
i_begin_snap Any Valid Snap Id - SnapId to start the baseline at
i_end_snap Any valid Snap Id - SnapId to end the baseline at
i_snap_range TRUE/FALSE TRUE Should the range of snapshots
between the begin and end snap
be included?
i_dbid | Any valid DBId/ Current Caters for RAC databases
i_instance_number | inst number DBId/ where you may wish to baseline
combination Inst # snapshots on one instance
in this which were physically taken
Statspack on another instance
schema
Example 1:
To make a baseline of snaps 45 and 50 including the range of snapshots
in between (and you do not wish to know the number of snapshots
baselined, so call the MAKE_BASELINE procedure). Log into the PERFSTAT
user in SQL*Plus, and:
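A sketch of the call, using the parameter names from the table above:

SQL> exec statspack.make_baseline(i_begin_snap => 45, i_end_snap => 50);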
Example 2:
To make a baseline of snaps 1237 and 1241 (including the range of
snapshots in between), and be informed of the number of snapshots
baselined (by calling the function), log into the PERFSTAT
user in SQL*Plus, and:
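A sketch of the call, using a bind variable to receive the function's return
value:

SQL> variable num_snaps number;
SQL> begin :num_snaps := statspack.make_baseline(1237, 1241); end;
  2  /
SQL> print num_snaps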
Example 3:
To make a baseline of only snapshots 1237 and 1241 (excluding the
snapshots in between), log into the PERFSTAT user in SQL*Plus,
and:
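A sketch of the call, setting i_snap_range to false so that only the two
snapshots named (and not the range between them) are baselined:

SQL> exec statspack.make_baseline -
(i_begin_snap => 1237, i_end_snap => 1241, i_snap_range => false);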
The input parameters for the MAKE_BASELINE and CLEAR_BASELINE procedure and
function which accept begin and end dates are identical. The procedures/
functions will baseline (or clear the baseline for) all snapshots which were
taken between the begin and end dates identified.
                   Range of          Default
Parameter Name     Valid Values      Value   Meaning
------------------ ----------------- ------- -------------------------------
i_begin_date Any valid date - Date to start the baseline at
i_end_date Any valid date > - Date to end baseline at
begin date
i_dbid | Any valid DBId/ Current Caters for RAC databases
i_instance_number | inst number DBId/ where you may wish to baseline
combination Inst # snapshots on one instance
in this which were physically taken
Statspack on another instance
schema
Example 1:
To make a baseline of snapshots taken between 12-Feb-2003 at 9am, and
12-Feb-2003 at 12 midday (and be informed of the number of snapshots
affected), call the MAKE_BASELINE function. Log into the PERFSTAT
user in SQL*Plus, and:
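A sketch of the call, using the dates from the example and a bind variable to
receive the number of snapshots baselined:

SQL> variable num_snaps number;
SQL> begin
  2    :num_snaps := statspack.make_baseline(
  3      to_date('12-FEB-2003 09:00','DD-MON-YYYY HH24:MI'),
  4      to_date('12-FEB-2003 12:00','DD-MON-YYYY HH24:MI'));
  5  end;
  6  /
SQL> print num_snaps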
Example 2:
To clear an existing baseline which covers the times 13-Dec-2002 at
11pm and 14-Dec-2002 at 2am (without wanting to know how many
snapshots were affected), log into the PERFSTAT user in SQL*Plus, and:
SQL> exec statspack.clear_baseline -
(to_date('13-DEC-2002 23:00','DD-MON-YYYY HH24:MI'), -
to_date('14-DEC-2002 02:00','DD-MON-YYYY HH24:MI'));
It is possible to purge unnecessary data from the PERFSTAT schema using the
PURGE procedures/functions. Any Baselined snapshots will not be purged.
NOTE:
o It is good practice to ensure you have sufficient baselined snapshots
before purging data.
o It is recommended you export the schema as a backup before running this
script, either using your own export parameters, or those provided in
spuexp.par
o WARNING: It is no longer possible to rollback a requested purge operation.
o The functionality which was in the sppurge.sql SQL script has been moved
into the STATSPACK package. Moving the purge functionality into the
STATSPACK package has allowed significantly more flexibility in how
the data to be purged can be specified by the performance engineer.
A begin and end snap Id pair can be specified. In this case, you choose
either to purge the range of snapshots between the begin and end
snapshot pair (inclusive, which is the default), or just the two
snapshots specified.
The preexisting Statspack sppurge.sql SQL script has been modified to
use this PURGE procedure (which purges by begin/end snap Id range).
A begin and end date pair can be specified. All snapshots which were
taken between the begin and end date will be purged.
All snapshots which were taken before the specified date will be purged.
All snapshots which were taken N or more days prior to the current date
and time (i.e. SYSDATE) will be purged.
Extended Purge
--------------
In prior releases, Statspack identifier tables which contained SQL Text,
SQL Execution plans, and Segment identifiers were not purged.
Purging this data may be resource intensive, so you may choose to perform
an extended purge less frequently than the normal purge.
Procedure or Function
---------------------
Each of the purge procedures has a corresponding function. The function
performs the same task as the procedure, but returns the number of
Snapshot rows purged (whereas the procedure does not).
This section describes the input parameters for the PURGE procedure and
function which accept Snap Ids. The input parameters for both procedure
and function are identical. The procedure/function will purge all
snapshots between the begin and end snap Ids identified (inclusive, which
is the default), or if i_snap_range parameter is FALSE, will only purge
the two snapshots specified. If i_extended_purge is TRUE, an extended purge
is also performed.
If the function is called, it will return the number of snapshots purged.
                   Range of          Default
Parameter Name     Valid Values      Value   Meaning
------------------ ----------------- ------- -------------------------------
i_begin_snap Any Valid Snap Id - SnapId to start purging from
i_end_snap Any valid Snap Id - SnapId to end purging at
i_snap_range TRUE/FALSE TRUE Should the range of snapshots
between the begin and end snap
be included?
i_extended_purge TRUE/FALSE FALSE Determines whether unused
SQL Text, SQL Plans and
Segment Identifiers will be
purged in addition to the
normal data purged
i_dbid | Any valid DBId/ Current Caters for RAC databases
i_instance_number | inst number DBId/ where you may wish to baseline
combination Inst # snapshots on one instance
in this which were physically taken
Statspack on another instance
schema
Example 1:
Purge all snapshots between the specified begin and end snap ids. Also
purge unused SQL Text, SQL Plans and Segment Identifiers, and
return the number of snapshots purged. Log into the PERFSTAT user
in SQL*Plus, and:
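A sketch of the call, using the parameter names from the table above (the snap
ids 1237 and 1241 are illustrative):

SQL> variable num_snaps number;
SQL> begin
  2    :num_snaps := statspack.purge(
  3      i_begin_snap => 1237, i_end_snap => 1241,
  4      i_extended_purge => true);
  5  end;
  6  /
SQL> print num_snaps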
This section describes the input parameters for the PURGE procedure and
function which accept a begin date and an end date. The procedure/
function will purge all snapshots taken between the specified begin and
end dates. The input parameters for both procedure and function are
identical. If i_extended_purge is TRUE, an extended purge is also performed.
If the function is called, it will return the number of snapshots purged.
                   Range of          Default
Parameter Name     Valid Values      Value   Meaning
------------------ ----------------- ------- -------------------------------
i_begin_date       Any valid date    -       Date to start purging from
i_end_date         Any valid date    -       Date to end purging at
                   > begin date
i_extended_purge TRUE/FALSE FALSE Determines whether unused
SQL Text, SQL Plans and
Segment Identifiers will be
purged in addition to the
normal data purged
i_dbid | Any valid DBId/ Current Caters for RAC databases
i_instance_number | inst number DBId/ where you may wish to baseline
combination Inst # snapshots on one instance
in this which were physically taken
Statspack on another instance
schema
Example 1:
Purge all snapshots which fall between 01-Jan-2003 and 02-Jan-2003.
Also perform an extended purge. Log into the PERFSTAT user in
SQL*Plus, and:
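A sketch of the call, using the dates from the example and the parameter names
from the table above:

SQL> exec statspack.purge -
(i_begin_date => to_date('01-JAN-2003','DD-MON-YYYY'), -
 i_end_date => to_date('02-JAN-2003','DD-MON-YYYY'), -
 i_extended_purge => true);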
This section describes the input parameters for the PURGE procedure and
function which accept a single date. The procedure/function will purge
all snapshots older than the date specified. If i_extended_purge is TRUE,
also perform an extended purge. The input parameters for both
procedure and function are identical.
If the function is called, it will return the number of snapshots purged.
                   Range of          Default
Parameter Name     Valid Values      Value   Meaning
------------------ ----------------- ------- -------------------------------
i_purge_before_date Date - Snapshots older than this date
will be purged
i_extended_purge TRUE/FALSE FALSE Determines whether unused
SQL Text, SQL Plans and
Segment Identifiers will be
purged in addition to the
normal data purged.
i_dbid | Any valid DBId/ Current Caters for RAC databases
i_instance_number | inst number DBId/ where you may wish to baseline
combination Inst # snapshots on one instance
in this which were physically taken
Statspack on another instance
schema
Example 1:
To purge data older than a specified date, without wanting to know the
number of snapshots purged, log into the PERFSTAT user in SQL*Plus,
and:
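A sketch of the procedure call, using the parameter name from the table above
(the date shown is illustrative):

SQL> exec statspack.purge -
(i_purge_before_date => to_date('31-OCT-2002','DD-MON-YYYY'));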
This section describes the input parameters for the PURGE procedure and
function which accept the number of days of snapshots to keep. All data
older than the specified number of days will be purged. The input
parameters for both procedure and function are identical. If
i_extended_purge is TRUE, also perform an extended purge.
If the function is called, it will return the number of snapshots purged.
                   Range of          Default
Parameter Name     Valid Values      Value   Meaning
------------------ ----------------- ------- -------------------------------
i_num_days Number > 0 - Snapshots older than this
number of days will be purged
i_extended_purge TRUE/FALSE FALSE Determines whether unused
SQL Text, SQL Plans and
Segment Identifiers will be
purged in addition to the
normal data purged
i_dbid | Any valid DBId/ Current Caters for RAC databases
i_instance_number | inst number DBId/ where you may wish to baseline
combination Inst # snapshots on one instance
in this which were physically taken
Statspack on another instance
schema
Example 1:
To purge data older than 31 days, without wanting to know the number
of snapshots operated on, log into the PERFSTAT user in SQL*Plus, and:
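A sketch of the procedure call, using the parameter name from the table above:

SQL> exec statspack.purge(i_num_days => 31);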
When sppurge is run, the instance currently connected to, and the
available snapshots are displayed. The DBA is then prompted for the
low Snap Id and high Snap Id. All snapshots which fall within this
range will be purged.
WARNING: sppurge.sql has been modified to use the new Purge functionality
in the STATSPACK package, therefore it is no longer possible to
rollback a requested purge operation - the purge is automatically
committed.
e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the
sppurge.sql script - sample output appears below.
Instance
DB Id DB Name Inst Num Name
----------- ---------- -------- ----------
720559826 PERF 1 perf
Base- Snap
Snap Id Snapshot Started line? Level Host Comment
-------- --------------------- ----- ----- --------------- --------------------
1 30 Feb 2000 10:00:01 6 perfhost
2 30 Feb 2000 12:00:06 Y 6 perfhost
3 01 Mar 2000 02:00:01 Y 6 perfhost
4 01 Mar 2000 06:00:01 6 perfhost
WARNING
~~~~~~~
sppurge.sql deletes all snapshots ranging between the lower and
upper bound Snapshot Id's specified, for the database instance
you are connected to. Snapshots identified as Baseline snapshots
which lie within the snapshot range will not be purged.
Deleting snapshots 1 - 2
e.g.
SQL> connect perfstat/perfstat_password
SQL> define losnapid=1
SQL> define hisnapid=2
SQL> @sppurge
If you run sptrunc.sql in error, the script allows you to exit before
beginning the truncate operation (you do this at the 'begin_or_exit'
prompt by typing in 'exit').