1)Network
use tnsping to check connectivity to the database listener; it reports the round-trip time in milliseconds
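A quick sanity check might look like this (ORCL is a placeholder TNS alias; the trailing 5 repeats the probe five times):

```
$ tnsping ORCL 5
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = ...))
OK (10 msec)
OK (10 msec)
...
```

Consistently high or wildly varying times suggest the problem is in the network, not the database.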
2)Application
check with the application team whether any application code has changed recently
3)SQL
check for badly performing SQL queries
use AWR and ADDM reports; use SQL Tuning Advisor
in real life, about 90% of performance issues can be solved by SQL tuning
4)Object
The cost-based optimization approach uses statistics to calculate the selectivity of predicates and to estimate the cost of each execution plan. Selectivity is the fraction of rows in a table that the SQL statement's predicate chooses. The optimizer uses the selectivity of a predicate to estimate the cost of a particular access method and to determine the optimal join order.
Statistics quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions. The optimizer uses these statistics to estimate how much I/O and memory are required to execute a SQL statement using a particular execution plan. The statistics are stored in the data dictionary, and they can be exported from one database and imported into another (for example, to transfer production statistics to a test system to simulate the real environment, even though the test system may only have small samples of data).
You must gather statistics on a regular basis to provide the optimizer with information about schema objects. New statistics should be gathered after a schema object's data or structure are modified in ways that make the previous statistics inaccurate. For example, after loading a significant number of rows into a table, you should collect new statistics on the number of rows. After updating data in a table, you do not need to collect new statistics on the number of rows, but you might need new statistics on the average row length.
--->Last analyzed date of a table/gathering statistics
select last_analyzed from dba_tables where table_name='EMP';
for tuning and statistical data --->desc dba_tables
---->To analyse a table
sql>analyze table scott.emp compute statistics;
sql>exec dbms_stats.gather_table_stats('SCOTT','EMP'); -->this is the better option (DBMS_STATS is preferred over ANALYZE)
---->To analyse a table using estimate option
sql>analyze table scott.emp estimate statistics;
sql>exec dbms_stats.gather_table_stats('SCOTT','EMP',estimate_percent=>40);
-->analyzing 40% of the table (the percentage must be passed as estimate_percent; the third positional parameter is the partition name)
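A fuller invocation can be sketched as follows; the parameter values shown are illustrative choices, not required settings:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle choose the sample size
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',  -- histograms where they help
    cascade          => TRUE);                        -- gather index statistics too
END;
/
```

AUTO_SAMPLE_SIZE is generally preferred over a fixed percentage because Oracle scales the sample to the data.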
Do not use bitmap indexes on tables or columns that are frequently updated.
To check the index types on a table
select index_name,index_type from user_indexes where table_name='EMP';
To check which columns of a table are indexed
SQL>select index_name,column_name,column_position from user_ind_columns where table_name='EMP';
Still facing a performance problem? Check whether we are using the right type of table.
types of tables
a)general (heap) table-used regularly
b)cluster table-table which shares common columns with other tables
c)Index organized table(IOT)-avoids creating indexes separately, as the data itself is stored in index form. Reads from an IOT are fast, but DML and DDL operations are very costly
d)Partition table-a normal table can be split logically into partitions so that queries can search only one partition, which improves search time. types of partition
a)Range
b)List
c)Hash
we can also have composite partitions of the following types
a)Range-range
b)Range-list
c)Range-hash
d)List-list
e)List-hash
f)List-range
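A range partition can be sketched as below; the SALES table and its boundaries are hypothetical, chosen only to show how a query on one year touches one partition (partition pruning):

```sql
CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p2022 VALUES LESS THAN (TO_DATE('01-01-2023','DD-MM-YYYY')),
  PARTITION p2023 VALUES LESS THAN (TO_DATE('01-01-2024','DD-MM-YYYY')),
  PARTITION pmax  VALUES LESS THAN (MAXVALUE)   -- catch-all for future dates
);

-- this predicate lets the optimizer scan only partition p2023
SELECT SUM(amount) FROM sales
WHERE  sale_date BETWEEN TO_DATE('01-01-2023','DD-MM-YYYY')
                     AND TO_DATE('31-12-2023','DD-MM-YYYY');
```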
5)Database Tuning
Fragmentation
a)The high watermark (HWM) marks the highest point in a table up to which data has ever been written
b)generally oracle will not reuse the space freed by deleting rows, because the high water mark is not reset at that time. this creates many unused free spaces in the table, which leads to fragmentation.
so data is scattered in different places of the table, and fetching it from those scattered places takes the server extra time. to defragment the space we need to exp/imp the table or move the table to another tablespace.
To move a table to other tablespace/same tablespace
SQL>alter table emp move tablespace mydata;
the above command creates a duplicate table, copies the data, and then drops the original table.
After a table move the corresponding indexes become unusable because the rowids change. we need to use any of the below commands to rebuild the indexes.
To check which indexes become unusable
SQL>select index_name,status from dba_indexes where table_name='EMP';
To rebuild the index
SQL>alter index pk_emp rebuild;
SQL>alter index pk_emp rebuild online;
SQL>alter index pk_emp rebuild online nologging; -->prefer this: NOLOGGING generates minimal redo, so the rebuild executes faster
Dropping Indexes
Monitor index usage:
ALTER INDEX <index_name> MONITORING USAGE;
SELECT index_name, used, monitoring
FROM V$OBJECT_USAGE
WHERE index_name = '<index_name>';
The optimizer avoids using nonselective indexes within query execution, but all indexes defined against a table must be maintained. Index maintenance can present a significant CPU and I/O resource demand in any write-intensive application. In other words, do not build indexes unless necessary.
For best performance, drop indexes that an application is not using. You can find indexes that are not being used by using the ALTER INDEX MONITORING USAGE functionality over a period of time that is representative of your workload. This records whether or not an index has been used in the V$OBJECT_USAGE view. If you find that an index has not been used, then drop it. Make sure that you are monitoring a representative workload to avoid dropping an index that is used by a workload that you did not sample.
Also, indexes within an application sometimes have uses that are not immediately apparent from a survey of statement execution plans. An example of this is a foreign key index on a parent table, which prevents share locks from being taken out on a child table.
You can test the impact of dropping an index by setting it to INVISIBLE. An invisible index still exists and is maintained, but is not used by the optimizer. If the index is needed, use the ALTER INDEX ... VISIBLE command.
Test the impact with an invisible index:
ALTER INDEX <index_name> INVISIBLE;
Invalid PL/SQL objects and unusable indexes have an impact on performance. Invalid PL/SQL objects must be recompiled before they can be used.
Invalid PL/SQL objects: The current status of PL/SQL objects can be viewed by querying the data dictionary. You can find invalid PL/SQL objects with:
SELECT object_name, object_type FROM DBA_OBJECTS
WHERE status = 'INVALID';
Invalid PL/SQL objects can be manually recompiled by using Enterprise Manager or through SQL commands:
ALTER PROCEDURE HR.add_job_history COMPILE;
Manually recompiling PL/SQL packages requires two steps:
ALTER PACKAGE HR.maintainemp COMPILE;
ALTER PACKAGE HR.maintainemp COMPILE BODY;
Unusable indexes are made valid by rebuilding them to recalculate the pointers. Rebuilding an unusable index re-creates the index in a new location and then drops the unusable index. This can be done either by using Enterprise Manager or through SQL commands:
ALTER INDEX HR.emp_empid_pk REBUILD;
ALTER INDEX HR.emp_empid_pk REBUILD ONLINE;
ALTER INDEX HR.email REBUILD TABLESPACE USERS;
--->use export/import, expdp/impdp, move table & shrink compact
Space can be returned to the tablespace from a segment with the following commands:
ALTER TABLE <table_name> SHRINK SPACE;
TRUNCATE TABLE <table_name> [DROP STORAGE];
ALTER TABLE <table_name> DEALLOCATE UNUSED;
To shrink a table
sql>alter table scott.emp enable row movement;
sql>alter table scott.emp shrink space compact;
sql>alter table scott.emp disable row movement;
Row movement must be enabled because the shrink moves rows, so their rowids change. Unlike a table move, a shrink maintains the indexes, so they remain usable. While the shrink is running users can still access the table, but it uses a full scan instead of an index scan.
Apart from table fragmentation we have tablespace fragmentation, and that occurs only in DMT (dictionary-managed tablespaces) or in LMT (locally managed tablespaces) with manual segment space management. The only solution is to export & import the objects in that tablespace. so it is always preferred to use LMT with ASSM.
ROW CHAINING
a)If a row is larger than the block size, its data spreads across multiple blocks
sql>conn scott/tiger
sql>alter session set sql_trace=true;
sql>select * from emp;
sql>alter session set sql_trace=false;
cd /$ORACLE_BASE/diag/rdbms/orcl/trace
ls -ltr *ora*.trc
orcl_ora____.trc latest trace file
tkprof orcl_ora____.trc chait_tkprof_report
vi chait_tkprof_report
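tkprof accepts sort and filter options that make large trace files much easier to read; a common invocation is sketched below (the trace-file name is a placeholder, as in the example above):

```
tkprof orcl_ora_1234.trc emp_report.txt sort=exeela,fchela sys=no
```

sort=exeela,fchela lists statements by elapsed execute and fetch time, so the worst offenders appear first; sys=no suppresses recursive SYS statements.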
or
Enabling SQL Trace
For your current session:
SQL> EXEC dbms_monitor.session_trace_enable;
SQL> EXECUTE dbms_session.set_sql_trace(true);
For any session:
SQL> EXECUTE dbms_system.set_sql_trace_in_session
(session_id, serial_id, true);
For instance-wide tracing:
SQL> EXEC dbms_monitor.database_trace_enable();
Disabling SQL Trace
For your current session:
SQL> EXEC dbms_monitor.session_trace_disable;
SQL> EXECUTE dbms_session.set_sql_trace(false);
For any session:
SQL> EXECUTE dbms_system.set_sql_trace_in_session
(session_id, serial_id, false);
For instance-wide tracing:
SQL> EXEC dbms_monitor.database_trace_disable()
By default, the .trc file is named after the SPID. You can find the SPID in V$PROCESS. An easier way of finding the file is the following:
SQL> ALTER SESSION SET TRACEFILE_IDENTIFIER = 'MY_FILE';
Then the trace file name will include the 'MY_FILE' string.
ASH,ADDM,AWR
Automatic Workload Repository (AWR) is a built-in repository in every Oracle Database. At regular intervals, the database makes a snapshot of all its vital statistics and workload information and stores them in AWR. The Automatic Database Diagnostic Monitor (ADDM) analyzes the AWR data on a regular basis, then locates the root causes of performance problems, provides recommendations for correcting any problems, and identifies nonproblem areas of the system. Because AWR is a repository of historical performance data, ADDM can be used to analyze performance issues after the event, often saving the time and resources of reproducing a problem.
a) An ADDM Report (addmrpt.sql)
------------------------------
The ADDM reporting utility creates a report of its database performance findings. The addmrpt.sql script is found in the $ORACLE_HOME/rdbms/admin directory. The output is written to the current working directory as a text file.
SQL>@$ORACLE_HOME/rdbms/admin/addmrpt.sql
b) AWR Report (awrrpt.sql)
------------------------------
The AWR reporting utility provides an overview of database performance within a specified period of time. It essentially computes the net change in database activity within the period. The awrrpt.sql script is found in the $ORACLE_HOME/rdbms/admin directory. The output is written to the current working directory; choose HTML format.
SQL>@$ORACLE_HOME/rdbms/admin/awrrpt.sql
c) ASH Report (ashrpt.sql)
---------------------------
The ASH report utility is useful for determining the number of active sessions, what they were doing, and which SQL statements were most active during a period of time. It is especially useful for analyzing transient performance issues. The ashrpt.sql script is found in the $ORACLE_HOME/rdbms/admin directory. The output is written to the current working directory; choose HTML format.
SQL>@$ORACLE_HOME/rdbms/admin/ashrpt.sql
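All three reports compare a pair of snapshots, and you can force a snapshot on demand rather than waiting for the next scheduled one; a minimal sketch:

```sql
-- take an on-demand AWR snapshot (in addition to the scheduled ones)
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- list recent snapshots to pick the begin/end IDs the report scripts ask for
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```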
A SQL Tuning Set (STS) is a database object that includes one or more SQL statements along with their execution statistics and execution context, and could include a user priority ranking. The SQL statements can be loaded into a SQL Tuning Set from different SQL sources, such as the Automatic Workload Repository, the cursor cache, or custom SQL provided by the user. An STS includes:
A set of SQL statements
Associated execution context, such as user schema, application module name and action, list of bind values, and the cursor compilation environment
Associated basic execution statistics, such as elapsed time, CPU time, buffer gets, disk reads, rows processed, cursor fetches, the number of executions, the number of complete executions, optimizer cost, and the command type
SQL statements can be filtered using the application module name and action, or any of the execution statistics. In addition, the SQL statements can be ranked based on any combination of execution statistics.
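Creating and loading an STS from the cursor cache can be sketched with the DBMS_SQLTUNE package; the set name 'my_sts' and the schema filter are illustrative assumptions:

```sql
-- create an empty SQL tuning set
EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'my_sts');

-- load statements parsed by SCOTT from the cursor cache into it
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   basic_filter => 'parsing_schema_name = ''SCOTT''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'my_sts', populate_cursor => cur);
END;
/
```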
SQL Tuning Advisor---Use SQL Tuning Advisor to recommend improvements on SQL statements
Manual SQL tuning is a complex process that presents many challenges. It requires expertise in several areas, is very time consuming, and requires an intimate knowledge of the schema structures and the data usage model of the application. All these factors make manual SQL tuning a challenging and resource intensive task that is ultimately very expensive for businesses.
SQL Tuning Advisor is Oracle's answer to all the pitfalls and challenges of manual SQL tuning. It automates the SQL tuning process by comprehensively exploring all the possible ways of tuning a SQL statement. The analysis and tuning is performed by the database engine's significantly enhanced query optimizer. Four types of analysis are performed by the SQL Tuning Advisor:
Statistics Analysis: The query optimizer needs up-to-date object statistics to generate good execution plans. In this analysis objects with stale or missing statistics are identified and appropriate recommendations are made to remedy the problem.
SQL Profiling: This feature, introduced in Oracle Database 10g, revolutionizes the approach to SQL tuning. Traditional SQL tuning involves manual manipulation of application code using optimizer hints. SQL Profiling eliminates the need for this manual process and tunes the SQL statements without requiring any change to the application code. This ability to tune SQL without changing the application code also helps solve the problem of tuning packaged applications. Packaged application users no longer need to log a bug with the application vendor and wait for several weeks or months to obtain a code fix for tuning the statement. With SQL profiling the tuning process is automatic and immediate.
Access Path Analysis: Indexes can tremendously enhance performance of a SQL statement by reducing the need for full table scans. Effective indexing is, therefore, a common tuning technique. In this analysis new indexes that can significantly enhance query performance are identified and recommended.
SQL Structure Analysis: Problems with the structure of SQL statements can lead to poor performance. These could be syntactic, semantic, or design problems with the statement. In this analysis relevant suggestions are made to restructure selected SQL statements for improved performance.
The output of this analysis is in the form of recommendations, along with a rationale for each recommendation and its expected performance benefit. The recommendation relates to collection of statistics on objects, creation of new indexes, restructuring of the SQL statement, or creation of a SQL Profile. A user can choose to accept the recommendation to complete the tuning of the SQL statements.
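Running the advisor for a single statement can be sketched as follows; the task name is arbitrary and the sql_id is a placeholder you would take from V$SQL:

```sql
DECLARE
  tname VARCHAR2(30);
BEGIN
  -- create a tuning task for one cached statement
  tname := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id    => 'abcd1234efgh5',
                                           task_name => 'emp_tune_1');
  -- run the four analyses described above
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => 'emp_tune_1');
END;
/
-- view the findings and recommendations
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('emp_tune_1') FROM dual;
```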
SQL Access Advisor---Use SQL Access Advisor to recommend improvements on schema structures
The design of the database schema can have a big impact on the overall application performance. SQL Access Advisor provides comprehensive advice on how to optimize schema design in order to maximize application performance. SQL Access and SQL Tuning Advisors, together, provide a complete solution for tuning database applications. These two advisors automate all manual-tuning techniques currently practiced and form the core of Oracle's automatic SQL tuning solution.
The SQL Access Advisor accepts input from all possible sources of interest, such as the cursor cache, the Automatic Workload Repository (AWR), and any user-defined workload, and will even generate a hypothetical workload if a schema contains dimensions or primary/foreign key relationships. It comprehensively analyzes the entire workload and provides recommendations to create new partitions or indexes if required, drop any unused indexes, and create new materialized views and materialized view logs.
Determining the optimal partitioning or indexing strategy for a particular workload is a complicated process that requires expertise and time. SQL Access Advisor considers the cost of insert/update/delete operations in addition to the queries on the workload and makes appropriate recommendations, accompanied by a quantifiable measure of expected performance gain as well as scripts needed to implement the recommendations.
The SQL Access Advisor takes the mystery out of the access design process. It tells the user exactly what types of indexes, partitions, and materialized views are required to maximize application performance. By automating this very critical function, SQL Access Advisor obviates the need for the error-prone, lengthy, and expensive manual tuning process. It is fast, precise, easy to use and, together with the SQL Tuning Advisor, offers the most accurate and cost-effective solution for application performance tuning.
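For a single statement, the quickest way in is DBMS_ADVISOR.QUICK_TUNE; the task name and SQL text below are illustrative:

```sql
BEGIN
  -- ask SQL Access Advisor for access-structure advice on one statement
  DBMS_ADVISOR.QUICK_TUNE(
    DBMS_ADVISOR.SQLACCESS_ADVISOR,
    'emp_access_task',
    'SELECT * FROM scott.emp WHERE deptno = 10');
END;
/
-- recommendations can then be read from the advisor dictionary views
SELECT rec_id, rank, benefit FROM user_advisor_recommendations;
```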
SQL Performance Analyzer (SPA)
SPA is the tool of choice when you want to know whether SQL statements will perform differently when a change is made to the system. You load the statements into a SQL tuning set from various sources, such as the cursor cache, the Automatic Workload Repository (AWR), and existing SQL tuning sets. SPA tests the effect of the change by executing each SQL statement in isolation. The order of execution depends on the order of the statement in the tuning set. The STS includes bind variable, execution plan, and execution context information.
With SPA, you will execute the STS and capture performance statistics, make the change to the system and execute the STS again, and then compare. SPA does not consider the impact that SQL statements can have on each other.
SQL Performance Analyzer: Process
1. Capture SQL workload on production.
2. Transport the SQL workload to a test system.
3. Build before-change performance data.
4. Make changes.
5. Build after-change performance data.
6. Compare results from steps 3 and 5.
7. Tune regressed SQL.
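Steps 3 to 6 above can be sketched with the DBMS_SQLPA package; the STS name 'my_sts' and the task name are illustrative assumptions:

```sql
DECLARE
  t VARCHAR2(64);
BEGIN
  t := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'my_sts',
                                       task_name   => 'spa_task');
  -- step 3: capture before-change performance
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => 'spa_task',
                                   execution_type => 'TEST EXECUTE',
                                   execution_name => 'before_change');
  -- ... make the change (step 4), then step 5:
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => 'spa_task',
                                   execution_type => 'TEST EXECUTE',
                                   execution_name => 'after_change');
  -- step 6: compare the two runs
  DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name      => 'spa_task',
                                   execution_type => 'COMPARE PERFORMANCE');
END;
/
```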
Note: 390374.1 Oracle Performance Diagnostic Guide
Note: 210014.1 How to Log a Good Performance Service Request, to guide you in logging a performance service request (SR).
Note: 330363.1 Remote Diagnostic Agent (RDA) 4 FAQ
a. What are the SQL statements and their associated number of executions where the CPU time consumed is greater than 200,000 microseconds?
SQL> SELECT sql_text, executions
2 FROM v$sqlstats
3 WHERE cpu_time > 200000;
b. What sessions logged in from the EDRSR9P1 computer within the last day?
SQL> SELECT * FROM v$session
2 WHERE machine = 'EDRSR9P1' and
3 logon_time > SYSDATE - 1;
c. What are the session IDs of any sessions that are currently holding a lock that is blocking another user, and how long has that lock been held? (block may be 1 or 0; 1 indicates that this session is the blocker.)
SQL> SELECT sid, ctime
2 FROM v$lock WHERE block > 0;
--->Query V$STATISTICS_LEVEL to determine which other parameters are affected by the STATISTICS_LEVEL parameter.
SQL> select statistics_name, activation_level
2 from v$statistics_level
3 order by 2;
Views that include microsecond timings:
V$SESSION_WAIT, V$SYSTEM_EVENT, V$SERVICE_EVENT, V$SESSION_EVENT (TIME_WAITED_MICRO column)
V$SQL, V$SQLAREA (CPU_TIME, ELAPSED_TIME columns)
V$LATCH, V$LATCH_PARENT, V$LATCH_CHILDREN (WAIT_TIME column)
V$SQL_WORKAREA, V$SQL_WORKAREA_ACTIVE (ACTIVE_TIME column)
Views that include millisecond timings:
V$ENQUEUE_STAT (CUM_WAIT_TIME column)
--->AWR Snapshot Purging Policy
You control the amount of historical AWR statistics by setting a retention period and a snapshot interval. In general, snapshots are removed automatically in chronological order. Snapshots that belong to baselines are retained until their baselines are removed or expire. On a typical system with 10 active sessions, AWR collections require 200 MB to 300 MB of space if the data is kept for seven days. The space consumption depends mainly on the number of active sessions in the system. A sizing script, utlsyxsz.sql, includes factors such as the size of the current occupants of the SYSAUX tablespace, number of active sessions, frequency of snapshots, and retention time. The awrinfo.sql script produces a report of the estimated growth rates of various occupants of the SYSAUX tablespace. Both scripts are located in the $ORACLE_HOME/rdbms/admin directory.
AWR handles space management for the snapshots. Every night the MMON process purges snapshots that are older than the retention period. If AWR detects that SYSAUX is out of space, it automatically reuses the space occupied by the oldest set of snapshots by deleting them. An alert is then sent to the DBA to indicate that SYSAUX is under space pressure.
Setting an appropriate retention interval for your AWR is critical for proper data retention, especially for predictive modeling. You can adjust the AWR retention period according to your analysis needs. In this example the retention period is specified as 3 years (1,576,800 minutes) and the interval between each snapshot is 60 minutes.
execute dbms_workload_repository.modify_snapshot_settings (
interval => 60,
retention => 1576800);
To create a baseline, the procedure signature is:
DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE (
start_snap_id IN NUMBER,
end_snap_id IN NUMBER,
baseline_name IN VARCHAR2);
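A concrete call might look like this; the snapshot IDs and baseline name are illustrative:

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 120,          -- illustrative snapshot IDs; look them
    end_snap_id   => 125,          -- up in DBA_HIST_SNAPSHOT first
    baseline_name => 'peak_monday');
END;
/
```

Snapshots inside a baseline are exempt from the nightly purge, which is what makes baselines useful as long-term reference points.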
--->Performance views is another name for the dynamic performance views, or V$ (v-dollar) views, that hold raw statistics in memory.
--->Trace files are very difficult to interpret until they have been formatted with the tkprof utility. The trcsess utility provides a unique tool for combining and filtering trace files to extract the statistics for a single session, service, or module across multiple trace files.
--->Fetch Phase
The Oracle Database retrieves rows for a SELECT statement during the fetch phase. Each fetch typically retrieves multiple rows, using an array fetch. Array fetches can improve performance by reducing network round trips. Each Oracle tool offers its own way of influencing the array size; for example, in SQL*Plus, you can change the fetch size by using the ARRAYSIZE setting:
SQL> show arraysize
arraysize 15
SQL> set arraysize 50
SQL*Plus processes 15 rows at a time by default. Very high array sizes provide little or no additional advantage.