
Tuning

1) Network
Use tnsping <database> and check the response time in milliseconds.
2) Application
Check with the application team regarding any code changes in the application.
3) SQL
Check for bad SQL queries using AWR and ADDM reports, and use SQL Tuning Advisor.
In real environments, roughly 90% of performance issues can be solved by SQL tuning.
4) Object
The cost-based optimization approach uses statistics to calculate the selectivity of predicates and to estimate the cost of each execution plan. Selectivity is the fraction of rows in a table that the SQL statement's predicate chooses. The optimizer uses the selectivity of a predicate to estimate the cost of a particular access method and to determine the optimal join order.
Statistics quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions. The optimizer uses these statistics to estimate how much I/O and memory are required to execute a SQL statement using a particular execution plan. The statistics are stored in the data dictionary, and they can be exported from one database and imported into another (for example, to transfer production statistics to a test system to simulate the real environment, even though the test system may only have small samples of data).
You must gather statistics on a regular basis to provide the optimizer with information about schema objects. New statistics should be gathered after a schema object's data or structure are modified in ways that make the previous statistics inaccurate. For example, after loading a significant number of rows into a table, you should collect new statistics on the number of rows. After updating data in a table, you do not need to collect new statistics on the number of rows, but you might need new statistics on the average row length.
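The export/import of statistics mentioned above is done with the DBMS_STATS package. A minimal sketch; the staging table name MYSTATS is illustrative, not from these notes:

```sql
-- 1. On production: create a staging table and export the schema stats into it.
EXEC DBMS_STATS.CREATE_STAT_TABLE('SCOTT','MYSTATS');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('SCOTT','MYSTATS');
-- 2. Move the MYSTATS table to the test system with exp/imp or expdp/impdp.
-- 3. On the test system: import the stats from the staging table into the dictionary.
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS('SCOTT','MYSTATS');
```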
---> Last analyzed date of a table / gathering statistics
select last_analyzed from dba_tables where table_name='EMP';
For tuning and getting statistical data ---> desc dba_tables
----> To analyze a table
sql>analyze table scott.emp compute statistics;
sql>exec dbms_stats.gather_table_stats('SCOTT','EMP'); --> this is the better option
----> To analyze a table using the estimate option
sql>analyze table scott.emp estimate statistics;
sql>exec dbms_stats.gather_table_stats('SCOTT','EMP',estimate_percent=>40);
--> analyzing 40% of the table

----> To analyze an index
sql>analyze index pk_emp compute statistics;
sql>exec dbms_stats.gather_index_stats('SCOTT','PK_EMP');
----> To analyze a schema
sql>exec dbms_stats.gather_schema_stats('SCOTT');
----> To analyze the full database (this will not consider base tables)
sql>exec dbms_stats.gather_database_stats;
----> To analyze base tables and dictionary views
sql>exec dbms_stats.gather_dictionary_stats;
sql>exec dbms_stats.gather_fixed_objects_stats;
System statistics do not change unless the workload significantly changes. As a result, system statistics do not need frequent adjustment. The DBMS_STATS.GATHER_SYSTEM_STATS procedure will collect system statistics over a specified period, or you can start the gathering of system statistics and make another call to stop gathering.
The NOWORKLOAD option takes a few minutes (depending on the size of the database) and captures estimates of I/O characteristics such as average read seek time and I/O transfer rate.
sql>EXEC dbms_stats.gather_system_stats('NOWORKLOAD');
Tables are divided into 3 categories:
a) Static tables (analyze monthly)
b) Semi-dynamic tables (analyze weekly)
c) Dynamic tables (analyze daily)
The optimizer works in 2 modes:
a) Rule-based optimization (RBO) - deprecated from 10g
b) Cost-based optimization (CBO) - chooses how best to use the fewest resources (memory and CPU) in the least time
Role of the Oracle Optimizer
The optimizer is the part of the Oracle Database that creates the execution plan for a SQL statement. The determination of the execution plan is an important step in the processing of any SQL statement and can greatly affect execution time.
The execution plan is a series of operations that are performed in sequence to execute the statement. The details of the various steps are shown in the Influencing the Optimizer lesson. The optimizer considers many factors related to the objects referenced and the conditions specified in the query.
The information needed by the optimizer includes:
Statistics gathered for the system (I/O, CPU, and so on) as well as schema objects (number of rows, index, and so on)
Information in the dictionary
WHERE clause qualifiers
Hints supplied by the developer
When you use diagnostic tools such as Enterprise Manager, EXPLAIN PLAN, and SQL*Plus AUTOTRACE, you can see the execution plan that the optimizer chooses.
Note: In Oracle Database 11g, the optimizer has two names based on its functionality: the query optimizer (or run-time optimizer) and the Automatic Tuning Optimizer.
The optimizer works in two modes. The first and usual mode is the run-time optimizer, which creates the execution plan at run time. In this mode the optimizer is time limited; it can only consider a limited number of alternatives. The second mode is called the Automatic Tuning Optimizer (ATO). In this mode the optimizer is given a much longer time to consider more options and gather statistics. The ATO can produce a better plan and create a SQL profile that will influence the optimizer to choose the better plan whenever the SQL statement is submitted in the future.
Explain Plan
An explain plan shows the flow of execution for a SQL statement. It lets you check whether the application is using the correct indexes.
sql>desc plan_table
To generate an explain plan we require the PLAN_TABLE table. If it is not there, we can create it using $ORACLE_HOME/rdbms/admin/utlxplan.sql.
After creating PLAN_TABLE, use the commands below to generate an explain plan:
sql>grant select,insert on plan_table to scott;
sql>conn scott/tiger
sql>explain plan for select * from emp;
To view the contents of the explain plan, run the $ORACLE_HOME/rdbms/admin/utlxpls.sql script.
SQL> select index_name from user_indexes where table_name='EMP';
Why does it go for a full table scan even though there are indexes on table EMP? Because the statement selects all rows of the table (there is no WHERE clause). If you need only 10 rows from a table but it scans the full table, a performance issue arises.

SQL>explain plan for select empno from emp where deptno=10;
To view the contents of the explain plan, run the $ORACLE_HOME/rdbms/admin/utlxpls.sql script.
Here it still goes for a full table scan because deptno does not have an index. Create indexes mainly for the columns used in the WHERE condition:
create index dept_idx on emp(deptno);
If you have 2 or 3 WHERE columns, then create a composite index, like:
create index comp_idx on emp(deptno,sal);
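As an alternative to the utlxpls.sql script, the DBMS_XPLAN package can format the most recent plan in PLAN_TABLE. A minimal sketch reusing the example query above:

```sql
-- Generate the plan, then format it with DBMS_XPLAN instead of utlxpls.sql.
EXPLAIN PLAN FOR SELECT empno FROM emp WHERE deptno = 10;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```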
Type of index to be created:
a) B-tree index - for high-cardinality columns (many distinct/unique values), e.g. empno, primary key columns
b) Bitmap index - for low-cardinality columns, e.g. gender (M/F), deptno
c) Function-based index - for columns queried through functions
d) Reverse key index - for monotonically increasing keys (such as sequence values), to reduce index block contention
c and d are less used; a and b are used mostly.
Creating Indexes
Creating additional indexes can improve performance. To improve the performance of individual SQL statements, consider using the following:
B*Tree indexes on the columns most used in the WHERE clause. Indexing a column frequently used in a join can improve join performance. Do not use B*Tree indexes on columns with only a few distinct values; bitmap indexes are a better choice for these.
Composite B*Tree indexes: Consider composite indexes over columns that frequently appear together in the same WHERE clause. Composite indexes of several low-selectivity columns may have a higher selectivity. Place the most frequently used column first in the index order. For large composite indexes, consider using a compressed index. Index key compression works well on indexes with leading columns that have a few distinct values; the leading, repeating values are eliminated. This reduces the size of the index and the number of I/Os.
Bitmap indexes can help performance where the tables have a large number of rows, the columns have a few distinct values, and the WHERE clause contains multiple predicates using these columns.
Do not use bitmap indexes on tables or columns that are frequently updated.
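The compressed composite index described above can be sketched as follows; the index name and COMPRESS 1 (compress only the leading deptno column) are illustrative:

```sql
-- Composite index with key compression on the low-cardinality leading column.
CREATE INDEX comp_idx ON emp (deptno, sal) COMPRESS 1;
```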
To see the type of each index on a table:
select index_name,index_type from user_indexes where table_name='EMP';
To see the indexed columns of a table:
SQL>select index_name,column_name,column_position from user_ind_columns where table_name='EMP';
If you are still facing performance problems, check whether you are using the right type of table.
Types of tables:
a) General (heap) table - used regularly
b) Cluster table - a table which shares common columns with other tables
c) Index-organized table (IOT) - avoids creating indexes separately, as the data itself is stored in index form. Reads from an IOT are fast, but DML and DDL operations are very costly.
d) Partitioned table - a normal table can be split logically into partitions so that queries can search only one partition, which improves search time.
Types of partitioning:
a) Range
b) List
c) Hash
We can also have composite partitioning of the following types:
a) Range-range
b) Range-list
c) Range-hash
d) List-list
e) List-hash
f) List-range
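A range-partitioned table of the kind described above can be sketched as follows; the table name, columns, and partition boundaries are illustrative, not from these notes:

```sql
-- Range partitioning by date: a query with a sale_date predicate
-- only needs to scan the matching partition (partition pruning).
CREATE TABLE sales_p (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2023 VALUES LESS THAN (TO_DATE('01-01-2024','DD-MM-YYYY')),
  PARTITION p2024 VALUES LESS THAN (TO_DATE('01-01-2025','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)
);
```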
5) Database Tuning
Fragmentation
a) The high water mark (HWM) is the level up to which data has been written in a table.
b) Generally Oracle will not reuse the space created by deleting rows, because the high water mark is not reset at that time. This creates many unused free spaces in the table, which leads to fragmentation.
Data is then scattered in different places of the table, so fetching it from different places is time-consuming for the Oracle server. To defragment the space we need to exp/imp the table or move the table to another tablespace.
To move a table to another tablespace (or the same tablespace):
SQL>alter table emp move tablespace mydata;
The above command creates a duplicate table, copies the data, and then drops the original table. After a table move, the corresponding indexes become unusable because the rowids change. We need to use one of the commands below to rebuild the indexes.
To check which indexes have become unusable:
SQL>select index_name,status from dba_indexes where table_name='EMP';
To rebuild the index:
SQL>alter index pk_emp rebuild;
SQL>alter index pk_emp rebuild online;
SQL>alter index pk_emp rebuild online nologging; --prefer this because nologging executes faster
Dropping Indexes
Monitor index usage:
ALTER INDEX <index_name> MONITORING USAGE;
SELECT index_name, used, monitoring
FROM V$OBJECT_USAGE
WHERE index_name = '<index_name>';
The optimizer avoids using nonselective indexes within query execution, but all indexes defined against a table must be maintained. Index maintenance can present a significant CPU and I/O resource demand in any write-intensive application. In other words, do not build indexes unless necessary.
For best performance, drop indexes that an application is not using. You can find indexes that are not being used by using the ALTER INDEX ... MONITORING USAGE functionality over a period of time that is representative of your workload. This records whether or not an index has been used in the V$OBJECT_USAGE view. If you find that an index has not been used, then drop it. Make sure that you are monitoring a representative workload to avoid dropping an index that is used by a workload that you did not sample.
Also, indexes within an application sometimes have uses that are not immediately apparent from a survey of statement execution plans. An example of this is a foreign key index on a child table, which prevents share locks from being taken out on the child table.
You can test the impact of dropping an index by setting it to INVISIBLE. An invisible index still exists and is maintained, but is not used by the optimizer. If the index is needed, use the ALTER INDEX ... VISIBLE command.
Test the impact with an invisible index:
ALTER INDEX <index_name> INVISIBLE;

Invalid PL/SQL objects and unusable indexes have an impact on performance. Invalid PL/SQL objects must be recompiled before they can be used.
Invalid PL/SQL objects: The current status of PL/SQL objects can be viewed by querying the data dictionary. You can find invalid PL/SQL objects with:
SELECT object_name, object_type FROM DBA_OBJECTS
WHERE status = 'INVALID';
Invalid PL/SQL objects can be manually recompiled by using Enterprise Manager or through SQL commands:
ALTER PROCEDURE HR.add_job_history COMPILE;
Manually recompiling a PL/SQL package requires two steps:
ALTER PACKAGE HR.maintainemp COMPILE;
ALTER PACKAGE HR.maintainemp COMPILE BODY;
Unusable indexes are made valid by rebuilding them to recalculate the pointers. Rebuilding an unusable index re-creates the index in a new location and then drops the unusable index. This can be done either by using Enterprise Manager or through SQL commands:
ALTER INDEX HR.emp_empid_pk REBUILD;
ALTER INDEX HR.emp_empid_pk REBUILD ONLINE;
ALTER INDEX HR.email REBUILD TABLESPACE USERS;
---> Use export/import, expdp/impdp, move table & shrink space compact.
Space can be returned to the tablespace from a segment with the following commands:
ALTER TABLE <table_name> SHRINK SPACE;
TRUNCATE TABLE <table_name> [DROP STORAGE];
ALTER TABLE <table_name> DEALLOCATE UNUSED;
To shrink a table:
sql>alter table scott.emp enable row movement;
sql>alter table scott.emp shrink space compact;
sql>alter table scott.emp disable row movement;
Rowids do change during a shrink, but unlike a table move, the indexes are maintained and remain usable. While shrinking, users can still access the table, but it may use full scans instead of index scans.
Apart from table fragmentation we have tablespace fragmentation, which occurs only in dictionary-managed tablespaces (DMT) or locally managed tablespaces (LMT) with manual segment space management. The only solution is to export and import the objects in that tablespace, so it is always preferred to use LMT with ASSM.
ROW CHAINING
a) If the row size is more than the block size, the data will spread into multiple blocks, forming a chain, which is called row chaining.
b) Performance degrades while getting data from multiple blocks.
c) The solution for row chaining is to create a new tablespace with a non-default block size and move the tables into it.
d) We can create tablespaces with non-default block sizes of 2k, 4k, 16k, 32k (8k is the default).
To find row chaining:
SQL>select table_name,chain_cnt from dba_tables where table_name='EMP';
To create a non-default block size tablespace:
SQL>create tablespace nontbs
datafile '/d01/oracle/oradata/orcl/nontbs.dbf' size 10m blocksize 16k;
Blocks of a non-default size cannot fit into the default database buffer cache, so it is required to enable a separate buffer cache.
To enable a non-default buffer cache:
SQL>alter system set db_16k_cache_size=100m scope=both;
Once defined, we can't change the default block size of the database. First we need to allocate the buffer cache in the spfile, and then create the tablespace.
ROW MIGRATION
a) Updating a row may increase the row size, and in such a case it will use the PCTFREE space.
b) If PCTFREE is full but a row still requires more space for the update, Oracle will move that entire row to another block.
c) If many rows are moved like this, more I/O must be performed to retrieve data, which degrades performance.
d) The solution to avoid row migration is to increase the PCTFREE percentage from the default 10% of the block to 30-40%. Creating a non-default block size also acts as a solution.
e) Because free space in blocks is managed automatically in locally managed tablespaces with ASSM, row migration is observed far less there.
Eliminating Migrated Rows
Export/import:
Export the table.
Drop or truncate the table.
Import the table.
MOVE table command:
ALTER TABLE EMPLOYEES MOVE;
Online table redefinition
Copy migrated rows:
Find migrated rows by using ANALYZE.
Copy migrated rows to a new table.
Delete migrated rows from the original table.
Copy rows from the new table to the original table.
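The "find migrated rows by using ANALYZE" step above can be sketched as follows; it assumes the CHAINED_ROWS table has been created with the $ORACLE_HOME/rdbms/admin/utlchain.sql script:

```sql
-- List migrated/chained rows of EMP into the CHAINED_ROWS table,
-- then look up their rowids.
ANALYZE TABLE scott.emp LIST CHAINED ROWS INTO chained_rows;
SELECT head_rowid FROM chained_rows WHERE table_name = 'EMP';
```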
The DBA reduces block visits by:
Using a larger block size
Packing rows as closely as possible into blocks
Preventing row migration
Row migration occurs when rows are updated. If the updated row grows and can no longer fit in the block, the row is moved to another block. A pointer is left in the original row location, and it points to the new location.
Unfortunately for the DBA, the last two goals conflict: As more data is packed into a block, the likelihood of migration increases.
Small Block Size: Considerations
Advantages:
Reduces block contention
Is good for small rows
Is good for random access
Disadvantages:
Has a relatively large space overhead
Has a small number of rows per block
Can cause more index blocks to be read
Large Block Size: Considerations
Advantages:
Less space overhead
Good for sequential access
Good for very large rows
Better performance of index reads
Disadvantages:
Increases block contention
Uses more space in the buffer cache
Block Allocation
When an INSERT or UPDATE operation requires more
space, a block must be found with adequate space.
Two methods for space allocation:
Manual segment space management
Uses freelists
Automatic Segment Space Management (ASSM)
Uses bitmap blocks
Free Lists
Free list managed space characteristics:
Segment header blocks hold all free lists.
Blocks are added to and removed from the free lists.
Free lists are searched for available blocks.
Segment headers are pinned for the search and update of
free lists.
Automatic Segment Space Management
Automatic Segment Space Management (ASSM)
characteristics:
Space is managed with bitmap blocks (BMB).
Multiple processes search different BMBs.
Availability of a block is shown with a full bit.
The fullness is shown by a percentage-full bit for each of 25,
50, 75, and 100 percent used.

BMBs are organized in a tree hierarchy similar to a B*Tree index, with the levels doubly linked. The maximum number of levels inside this hierarchy is three. The leaves of this hierarchy represent the space information for a set of contiguous data blocks that belong to the segment. The BMB leaves are the unit at which space has affinity to an instance in a multi-instance environment.
Block Space Management with ASSM


If you execute a CREATE TABLE statement with PCTFREE set to 20 in an ASSM tablespace, 20% of each data block in the data segment of this table is reserved for updates to the existing rows in each block. The used space in the block can grow (1) until the row data and overhead total 80% of the total block size. Then the block is marked as full, and the full bit is set (2). After the full bit is set, the block is not available for inserts. There are four other bits that indicate 100%, 75%, 50%, and 25% full. When the block reaches 80%, all the bits except the 100% bit have been set, and then the full bit is set.
With ASSM-managed space, after a DELETE or UPDATE operation, the server process checks whether the space being used in the block is now less than a preset threshold. The threshold levels are 25, 50, 75, and 100 percent. If the used space is less than a threshold that is lower than PCTFREE, the block is marked as not full and available for inserts. In this example, the used space must fall below 75 percent to be marked as available for inserts (3).
After a data block is filled to the PCTFREE limit again (4), the server process again considers the block unavailable for the insertion of new rows until the percentage of that block falls below the ASSM threshold.
6) INSTANCE TUNING
If, even after performing all the steps above, performance problems still exist, we need instance-level tuning.
SQL*Plus AUTOTRACE
In SQL*Plus, you can automatically obtain the execution plan and some additional statistics about the running of a SQL command by using the AUTOTRACE setting. Unlike the EXPLAIN PLAN command, the statement is actually run. However, you can choose to suppress the display of the statement results by specifying AUTOTRACE TRACEONLY EXPLAIN.
AUTOTRACE is a convenient diagnostic tool for SQL statement tuning. Because it is purely declarative, it is easier to use than EXPLAIN PLAN.
Command Options
OFF: Disables autotracing SQL statements
ON: Enables autotracing SQL statements
TRACEONLY: Enables autotracing SQL statements and suppresses statement output
EXPLAIN: Displays execution plans but does not display statistics
STATISTICS: Displays statistics but does not display execution plans
Note: If both the EXPLAIN and STATISTICS command options are omitted, execution plans and statistics are displayed by default.
To start tracing statements using AUTOTRACE:
set autotrace on
To hide statement output:
set autotrace traceonly
To display only execution plans:
set autotrace traceonly explain
To display only statistics (control the layout with column settings):
set autotrace traceonly statistics
SELECT *
FROM products;
TKPROF report
TKPROF (Transient Kernel Profiler) produces a report which shows details like time taken and CPU utilization in every phase (parse, execute, and fetch) of SQL execution.
SQL Trace Facility
If you are using Standard Edition or do not have the Diagnostics Pack, the SQL Trace facility and TKPROF let you collect the statistics for SQL execution plans to compare performance. A good way to compare two execution plans is to execute the statements and compare the statistics to see which one performs better. SQL Trace writes its session statistics output to a file, and you use TKPROF to format it. You can use these tools along with EXPLAIN PLAN to get the best results.
SQL Trace facility:
Can be enabled for a session or for an instance
Reports on volume and time statistics for the parse, execute, and fetch phases
Produces output that can be formatted by TKPROF
When the SQL Trace facility is enabled for a session, the Oracle Database generates a trace file containing session statistics for traced SQL statements for that session. When the SQL Trace facility is enabled for an instance, the Oracle Database creates trace files for all sessions.
Note: SQL Trace involves some overhead, so you usually do not want to enable SQL Trace at the instance level.
Steps to take a TKPROF report:
sql>grant alter session to scott;
sql>conn scott/tiger
sql>alter session set sql_trace=true;
sql>select * from emp;
sql>alter session set sql_trace=false;
cd $ORACLE_BASE/diag/rdbms/orcl/trace
ls -ltr *ora*.trc
orcl_ora____.trc  (latest trace file)
tkprof orcl_ora____.trc chait_tkprof_report
vi chait_tkprof_report
or
Enabling SQL Trace
For your current session:
SQL> EXEC dbms_monitor.session_trace_enable;
SQL> EXECUTE dbms_session.set_sql_trace(true);
For any session:
SQL> EXECUTE dbms_system.set_sql_trace_in_session(session_id, serial_id, true);
For instance-wide tracing:
SQL> EXEC dbms_monitor.database_trace_enable();
Disabling SQL Trace
For your current session:
SQL> EXEC dbms_monitor.session_trace_disable;
SQL> EXECUTE dbms_session.set_sql_trace(false);
For any session:
SQL> EXECUTE dbms_system.set_sql_trace_in_session(session_id, serial_id, false);
For instance-wide tracing:
SQL> EXEC dbms_monitor.database_trace_disable();
By default, the .trc file is named after the SPID. You can find the SPID in V$PROCESS. An easier way of finding the file is the following:
SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_FILE';
Then the trace file name will include the 'MY_FILE' string.

a) From the TKPROF report, if we observe that a frequently executed statement is getting parsed every time, the reason could be the statement flushing out of the shared pool because of insufficient size; increasing the shared pool size is the solution.
b) If we observe fetching happening every time, it could be because of data flushing out of the buffer cache, for which increasing its size is the solution.
c) If the size of the database buffer cache is enough to hold the data but data is still flushing out, in such cases we can use the keep and recycle caches.
d) If the execute phase is the problem, increase the PGA.

To enable the keep cache & recycle cache:
sql>alter system set db_keep_cache_size=50m scope=both;
sql>alter system set db_recycle_cache_size=50m scope=both;
To place a table in the keep or recycle cache:
sql>alter table scott.emp storage(buffer_pool keep);
sql>alter table scott.emp storage(buffer_pool recycle);
If a table is placed in the keep cache, it will stay in the instance for its lifetime without flushing. If a table is placed in the recycle cache, it will be flushed immediately without waiting for LRU aging to occur. Frequently used tables should be placed in the keep cache, whereas full-scan tables should be placed in the recycle cache.
STATSPACK
a) It is a report which details database performance during a given period of time.
STEPS FOR GENERATING A STATSPACK REPORT
sql>@$ORACLE_HOME/rdbms/admin/spcreate.sql
This will create a PERFSTAT user who is responsible for storing statistical data.
$sqlplus perfstat/perfstat
sql>exec statspack.snap; ---> begin snapshot
sql>exec statspack.snap; ---> end snapshot
sql>@$ORACLE_HOME/rdbms/admin/spreport.sql
A Statspack report can have snapshot levels from 0 to 10, and the default is 5.

ASH, ADDM, AWR
Automatic Workload Repository (AWR) is a built-in repository in every Oracle Database. At regular intervals, the database makes a snapshot of all its vital statistics and workload information and stores them in AWR. The Automatic Database Diagnostic Monitor (ADDM) analyzes the AWR data on a regular basis, then locates the root causes of performance problems, provides recommendations for correcting any problems, and identifies nonproblem areas of the system. Because AWR is a repository of historical performance data, ADDM can be used to analyze performance issues after the event, often saving the time and resources of reproducing a problem.
a) ADDM Report (addmrpt.sql)
------------------------------
The ADDM reporting utility creates a report of its database performance findings. The addmrpt.sql script is found in the $ORACLE_HOME/rdbms/admin directory. The output is written to the current working directory as a text file.
SQL>@$ORACLE_HOME/rdbms/admin/addmrpt.sql
b) AWR Report (awrrpt.sql)
------------------------------
The AWR reporting utility provides an overview of database performance within a specified period of time. It essentially computes the net change in database activity within the period. The awrrpt.sql script is found in the $ORACLE_HOME/rdbms/admin directory. The output is written to the current working directory; please select HTML format.
SQL>@$ORACLE_HOME/rdbms/admin/awrrpt.sql
c) ASH Report (ashrpt.sql)
---------------------------
The ASH report utility is useful for determining the number of active sessions, what they were doing, and which SQL statements were most active during a period of time. It is especially useful for analyzing transient performance issues. The ashrpt.sql script is found in the $ORACLE_HOME/rdbms/admin directory. The output is written to the current working directory; please select HTML format.
SQL>@$ORACLE_HOME/rdbms/admin/ashrpt.sql
A SQL Tuning Set (STS) is a database object that includes one or more SQL statements along with their execution statistics and execution context, and could include a user priority ranking. The SQL statements can be loaded into a SQL Tuning Set from different SQL sources, such as the Automatic Workload Repository, the cursor cache, or custom SQL provided by the user. An STS includes:
A set of SQL statements
Associated execution context, such as user schema, application module name and action, list of bind values, and the cursor compilation environment
Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads, rows processed, cursor fetches, the number of executions, the number of complete executions, optimizer cost, and the command type
SQL statements can be filtered using the application module name and action, or any of the execution statistics. In addition, the SQL statements can be ranked based on any combination of execution statistics.
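Loading an STS from the cursor cache, as described above, can be sketched with the DBMS_SQLTUNE package. The set name MY_STS and the parsing-schema filter are illustrative placeholders:

```sql
-- Create an empty STS, then load it with SCOTT's statements from the cursor cache.
EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'MY_STS');
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
    FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('parsing_schema_name = ''SCOTT''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'MY_STS', populate_cursor => cur);
END;
/
```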
SQL Tuning Advisor --- use SQL Tuning Advisor to recommend improvements to SQL statements.
Manual SQL tuning is a complex process that presents many challenges. It requires expertise in several areas, is very time consuming, and requires an intimate knowledge of the schema structures and the data usage model of the application. All these factors make manual SQL tuning a challenging and resource-intensive task that is ultimately very expensive for businesses.
SQL Tuning Advisor is Oracle's answer to all the pitfalls and challenges of manual SQL tuning. It automates the SQL tuning process by comprehensively exploring all the possible ways of tuning a SQL statement. The analysis and tuning is performed by the database engine's significantly enhanced query optimizer. Four types of analysis are performed by the SQL Tuning Advisor:
Statistics Analysis: The query optimizer needs up-to-date object statistics to generate good execution plans. In this analysis, objects with stale or missing statistics are identified and appropriate recommendations are made to remedy the problem.
SQL Profiling: This feature, introduced in Oracle Database 10g, revolutionizes the approach to SQL tuning. Traditional SQL tuning involves manual manipulation of application code using optimizer hints. SQL Profiling eliminates the need for this manual process and tunes the SQL statements without requiring any change to the application code. This ability to tune SQL without changing the application code also helps solve the problem of tuning packaged applications. Packaged application users no longer need to log a bug with the application vendor and wait for several weeks or months to obtain a code fix for tuning the statement. With SQL Profiling, the tuning process is automatic and immediate.
Access Path Analysis: Indexes can tremendously enhance the performance of a SQL statement by reducing the need for full table scans. Effective indexing is, therefore, a common tuning technique. In this analysis, new indexes that can significantly enhance query performance are identified and recommended.
SQL Structure Analysis: Problems with the structure of SQL statements can lead to poor performance. These could be syntactic, semantic, or design problems with the statement. In this analysis, relevant suggestions are made to restructure selected SQL statements for improved performance.
The output of this analysis is in the form of recommendations, along with a rationale for each recommendation and its expected performance benefit. The recommendations relate to the collection of statistics on objects, the creation of new indexes, the restructuring of the SQL statement, or the creation of a SQL Profile. A user can choose to accept the recommendations to complete the tuning of the SQL statements.
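Running SQL Tuning Advisor manually can be sketched with the DBMS_SQLTUNE package; the task name and sql_id below are illustrative placeholders:

```sql
-- Create and run a tuning task for one statement, then view the recommendations.
DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sql_id    => 'abcd1234efgh5',
              task_name => 'MY_TUNING_TASK');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'MY_TUNING_TASK');
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('MY_TUNING_TASK') FROM dual;
```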
SQL Access Advisor---Used SQL Access Advisor to recommend improvements on schema
structures
The design of the database schema can have a big impact on the overall applicati
on performance. SQL
Access Advisor, provides comprehensive advice on how to optimize schema design i
n order to maximize
application performance. SQL Access and SQL Tuning Advisors, together, provide a
complete solution for
tuning database applications. These two advisors automate all manual-tuning tech
niques currently
practiced and form the core of Oracle s automatic SQL tuning solution.
The SQL Access Advisor accepts input from all possible sources of interest, such as the cursor cache, the Automatic Workload Repository (AWR), and any user-defined workload, and will even generate a hypothetical workload if a schema contains dimensions or primary/foreign key relationships. It comprehensively analyzes the entire workload and provides recommendations to create new partitions or indexes if required, drop any unused indexes, and create new materialized views and materialized view logs.
Determining the optimal partitioning or indexing strategy for a particular workload is a complicated process that requires expertise and time. SQL Access Advisor considers the cost of insert/update/delete operations in addition to the queries in the workload and makes appropriate recommendations, accompanied by a quantifiable measure of expected performance gain as well as the scripts needed to implement the recommendations.
The SQL Access Advisor takes the mystery out of the access design process. It tells the user exactly what types of indexes, partitions, and materialized views are required to maximize application performance. By automating this critical function, SQL Access Advisor obviates the need for an error-prone, lengthy, and expensive manual tuning process. It is fast, precise, and easy to use and, together with the SQL Tuning Advisor, offers an accurate and cost-effective solution for application performance tuning.
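As a sketch, a SQL Access Advisor run for a single statement can be kicked off through the DBMS_ADVISOR package (the task name and SQL text here are illustrative, not from the original notes):

```sql
-- Quick-tune one statement with SQL Access Advisor (task name is an example)
EXECUTE DBMS_ADVISOR.QUICK_TUNE( -
  DBMS_ADVISOR.SQLACCESS_ADVISOR, 'emp_access_task', -
  'SELECT ename FROM scott.emp WHERE deptno = 10');

-- Review the generated recommendations
SELECT rec_id, rank, benefit
FROM   user_advisor_recommendations
WHERE  task_name = 'emp_access_task';
```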
SQL Performance Analyzer (SPA)
SPA is the tool of choice when you are trying to identify SQL statements that will perform differently when a change is made at the database or OS level. SPA captures SQL statements into a SQL tuning set from various sources, including the cursor cache, the Automatic Workload Repository (AWR), and existing SQL tuning sets (STS). The STS is analyzed by executing each SQL statement in isolation. The order of execution depends on the order of the statements in the tuning set. The STS includes bind variable, execution plan, and execution context information.
With SPA, you execute the STS and capture performance statistics, make the change to the system and execute the STS again, and then compare the results. SPA does not consider the impact that SQL statements can have on each other.
SQL Performance Analyzer: Process
1. Capture SQL workload on production.
2. Transport the SQL workload to a test system.
3. Build before-change performance data.
4. Make changes.
5. Build after-change performance data.
6. Compare results from steps 3 and 5.
7. Tune regressed SQL.
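The steps above can be sketched with the DBMS_SQLPA package (assuming an STS named MY_STS has already been captured and transported to the test system; the task and execution names are illustrative):

```sql
-- Create an analysis task over the captured tuning set
VARIABLE tname VARCHAR2(64)
EXECUTE :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'MY_STS');

-- Step 3: build before-change performance data
EXECUTE DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
  execution_type => 'TEST EXECUTE', execution_name => 'before_change');

-- Step 4: make the database or OS change, then
-- Step 5: build after-change performance data
EXECUTE DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
  execution_type => 'TEST EXECUTE', execution_name => 'after_change');

-- Step 6: compare the two executions
EXECUTE DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :tname, -
  execution_type => 'COMPARE PERFORMANCE', execution_name => 'compare_runs');

-- Report the comparison; regressed SQL (step 7) is then tuned individually
SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK(:tname, 'TEXT') FROM dual;
```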
Note 390374.1: Oracle Performance Diagnostic Guide
Note 210014.1: How to Log a Good Performance Service Request, to guide you in logging a performance service request (SR)
Note 330363.1: Remote Diagnostic Agent (RDA) 4 FAQ
a. What are the SQL statements and their associated number of executions where the CPU time consumed is greater than 200,000 microseconds?
SQL> SELECT sql_text, executions
2 FROM v$sqlstats
3 WHERE cpu_time > 200000;
b. What sessions logged in from the EDRSR9P1 computer within the last day?
SQL> SELECT * FROM v$session
2 WHERE machine = 'EDRSR9P1' and
3 logon_time > SYSDATE - 1;
c. What are the session IDs of any sessions that are currently holding a lock that is blocking another user, and how long has that lock been held? (BLOCK may be 1 or 0; 1 indicates that this session is the blocker.)
SQL> SELECT sid, ctime
2 FROM v$lock WHERE block > 0;
--->Query V$STATISTICS_LEVEL to determine which other parameters are affected by the STATISTICS_LEVEL parameter.
SQL> select statistics_name, activation_level
2 from v$statistics_level
3 order by 2;

--->The full list of statistics can be found in the V$STATNAME view.
--->The full list of wait events can be found in the V$EVENT_NAME view.
--->The statistic classes are stored in the V$SESSTAT and V$SYSSTAT views.
--->The service-level view includes the SERVICE_NAME column, and the session-level view includes the SID (session identifier) column. These allow you to join to the V$SERVICE_NAME and V$SESSION views.
Determine the sessions that consume more than 30,000 bytes of PGA memory.
SQL> SELECT username, name, value
2 FROM v$statname n, v$session s, v$sesstat t
3 WHERE s.sid=t.sid
4 AND n.statistic#=t.statistic#
5 AND s.type='USER'
6 AND s.username is not null
7 AND n.name='session pga memory'
8 AND t.value > 30000;
USERNAME   NAME                 VALUE
---------- ------------------- ------
SYSTEM     session pga memory  468816
--->The V$SGAINFO view provides the current size of the SGA components, the granule size, and free memory. A brief summary is presented in the V$SGA view. All calculated memory statistics are displayed in the V$SGASTAT view. You can query this view to find cumulative totals of detailed SGA usage since the instance started.
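For example, a quick per-pool summary of SGA usage can be taken from V$SGASTAT (the megabyte rounding is just for readability):

```sql
-- Summarize SGA usage by pool, cumulative since instance startup
SELECT pool, ROUND(SUM(bytes)/1024/1024, 1) AS mb
FROM   v$sgastat
GROUP  BY pool
ORDER  BY mb DESC;
```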
--->All wait events are named in the V$EVENT_NAME view, including:
free buffer waits
latch free
buffer busy waits
db file sequential read
db file scattered read
db file parallel write
undo segment tx slot
undo segment extension
--->Wait event statistics levels:
System
Service
Session
Wait event statistics columns vary by view.
V$SERVICE_EVENT
V$SYSTEM_EVENT
V$SESSION_EVENT
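As an illustration, the system-level view can be queried for the heaviest waits (TIME_WAITED in V$SYSTEM_EVENT is in centiseconds; the top-10 cutoff is arbitrary):

```sql
-- Top 10 wait events instance-wide, ordered by accumulated wait time
SELECT event, total_waits, time_waited
FROM   (SELECT event, total_waits, time_waited
        FROM   v$system_event
        ORDER  BY time_waited DESC)
WHERE  ROWNUM <= 10;
```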
--->Views that include microsecond timings:
V$SESSION_WAIT, V$SYSTEM_EVENT, V$SERVICE_EVENT, V$SESSION_EVENT (TIME_WAITED_MICRO column)
V$SQL, V$SQLAREA (CPU_TIME, ELAPSED_TIME columns)
V$LATCH, V$LATCH_PARENT, V$LATCH_CHILDREN (WAIT_TIME column)
V$SQL_WORKAREA, V$SQL_WORKAREA_ACTIVE (ACTIVE_TIME column)
Views that include millisecond timings:
V$ENQUEUE_STAT (CUM_WAIT_TIME column)
--->AWR Snapshot Purging Policy
You control the amount of historical AWR statistics by setting a retention period and a snapshot interval. In general, snapshots are removed automatically in chronological order. Snapshots that belong to baselines are retained until their baselines are removed or expire. On a typical system with 10 active sessions, AWR collections require 200 MB to 300 MB of space if the data is kept for seven days. The space consumption depends mainly on the number of active sessions in the system. A sizing script, utlsyxsz.sql, includes factors such as the size of the current occupants of the SYSAUX tablespace, the number of active sessions, the frequency of snapshots, and the retention time. The awrinfo.sql script produces a report of the estimated growth rates of various occupants of the SYSAUX tablespace. Both scripts are located in the $ORACLE_HOME/rdbms/admin directory.
AWR handles space management for the snapshots. Every night the MMON process purges snapshots that are older than the retention period. If AWR detects that SYSAUX is out of space, it automatically reuses the space occupied by the oldest set of snapshots by deleting them. An alert is then sent to the DBA to indicate that SYSAUX is under space pressure.
Setting an appropriate retention interval for your AWR is critical for proper data retention, especially for predictive modeling. You can adjust the AWR retention period according to your analysis needs.
In this example the retention period is specified as 3 years (1,576,800 minutes) and the interval between snapshots is 60 minutes:
execute dbms_workload_repository.modify_snapshot_settings (
  interval => 60,
  retention => 1576800);
To create a baseline:
execute DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (
  start_snap_id IN NUMBER,
  end_snap_id IN NUMBER,
  baseline_name IN VARCHAR2);
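A concrete call might look like the following (the snapshot IDs and baseline name are illustrative; pick real IDs from DBA_HIST_SNAPSHOT first):

```sql
-- Find candidate snapshot IDs for the interval of interest
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

-- Create a baseline over the chosen interval (example IDs and name)
EXECUTE DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE ( -
  start_snap_id => 100, end_snap_id => 110, -
  baseline_name => 'peak_load_bl');
```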
--->Performance views is another name for the dynamic performance views, or V$ (v-dollar) views, that hold raw statistics in memory.
--->Trace files are very difficult to interpret until they have been formatted with the tkprof utility. The trcsess utility provides a tool for combining and filtering trace files to extract the statistics for a single session, service, or module across multiple trace files.
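As a sketch of how the two utilities fit together (file names and the service name are examples; run from the trace directory on a host where the Oracle client tools are installed):

```shell
# Merge all trace files for one service into a single file
trcsess output=combined.trc service=orclsvc *.trc

# Format the merged trace, excluding recursive SYS statements,
# sorted by elapsed execution time
tkprof combined.trc report.txt sys=no sort=exeela
```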

--->Fetch Phase
The Oracle Database retrieves rows for a SELECT statement during the fetch phase. Each fetch typically retrieves multiple rows, using an array fetch. Array fetches can improve performance by reducing network round trips. Each Oracle tool offers its own way of influencing the array size; for example, in SQL*Plus, you can change the fetch size by using the ARRAYSIZE setting:
SQL> show arraysize
arraysize 15
SQL> set arraysize 50
SQL*Plus processes 15 rows at a time by default. Very high array sizes provide little or no advantage.
