
Introduction

To improve concurrency, DB2 permits the deferral of row locks for Cursor Stability (CS) or Read Stability
(RS) isolation scans in some situations, until a record is known to satisfy the predicates of a query. By
default, when row locking is performed during a table or index scan, DB2 locks each row before
determining whether the row qualifies for the query. To improve the concurrency of scans, it may be possible
to defer row locking until after determining that a row qualifies for the query. This lock deferral feature was
introduced in DB2 UDB V8.1.4 (FixPak 4) through the registry variable DB2_EVALUNCOMMITTED. In
DB2 UDB V8.2.2 (FixPak 9) this functionality was extended to include ISCAN-FETCH access plans. With
this variable enabled, predicate evaluation can occur on uncommitted data.
In addition, you might improve concurrency by setting the registry variables DB2_SKIPDELETED (introduced in
DB2 UDB V8.1.4) and DB2_SKIPINSERTED (introduced in DB2 UDB V8.2.2). These registry variables permit
scans to unconditionally skip uncommitted deletes and inserts, respectively.
This article will review the basics of how DB2 handles concurrency, then take a look at how you can
realize performance gains with these new registry variables.

DB2 concurrency basics
In a multi-user database environment, transactions are usually executed simultaneously. Each
transaction has the potential to interfere with any other transaction that is running. Without any
appropriate locking mechanism, the following undesirable effects may occur:
Lost updates: Two applications, A and B, might both read the same row from the database, and both
calculate new values for one of its columns based on the data these applications read. If A updates the
row with its new value and B then also updates the row, the update performed by A is lost.
Access to uncommitted data: Application A might update a value in the database, and application B
might read that value before it is committed. If A's change is later rolled back rather than committed, the
calculations performed by B are based on uncommitted (and presumably invalid) data.
Non-repeatable reads: Some applications involve the following sequence of events:
o Application A reads a row from the database, then goes on to process other SQL requests.
o In the meantime, application B either modifies or deletes the row and commits the change.
o Later, when application A attempts to read the original row again, it receives the modified row or
discovers that the original row has been deleted.
Phantom reads: The phantom read phenomenon occurs when:
o Your application executes a query that reads a set of rows based on some search criterion.
o Another application inserts new data or updates existing data that would satisfy your application's query.
o Your application repeats the query from the first step (within the same unit of work). Now some additional
(phantom) rows are returned as part of the result set that were not returned when the query was
initially executed in the first step.
One way DB2 UDB enforces concurrency is through the use of isolation levels, which determine how data
used in one transaction is locked or isolated from other transactions while the first transaction accesses it.
Isolation levels
DB2 Universal Database provides different levels of protection to isolate the data from each of the
database applications while it is being accessed. These levels of protection are known as isolation levels,
or locking strategies. Choosing an appropriate isolation level ensures data integrity and also avoids
unnecessary locking. The isolation levels supported by DB2 are listed below, ordered in terms of
concurrency, starting with the maximum:
Uncommitted Read: The Uncommitted Read (UR) isolation level, also known as "dirty read," is the
lowest level of isolation supported by DB2. It can be used to access uncommitted data changes of other
applications. For example, an application using the Uncommitted Read isolation level will return all of the
matching rows for the query, even if that data is in the process of being modified and may not be
committed to the database. You need to be aware that if you are using Uncommitted Read, two identical
queries may get different results, even if they are issued within a unit of work, since other concurrent
applications can change or modify those rows that the first query retrieves.
Uncommitted Read transactions hold very few locks, so they are unlikely to wait for other
transactions to release locks. If you are accessing read-only tables, or if it is acceptable for the application
to retrieve uncommitted data updated by another application, use this isolation level, because it offers the
best performance.
Cursor Stability: The Cursor Stability (CS) isolation level is the default isolation level and locks any row
on which the cursor is positioned during a unit of work. The lock on the row is held until the next row is
fetched or the unit of work is terminated. If a row has been updated, the lock is held until the unit of work
is terminated. A unit of work is terminated when either a COMMIT or ROLLBACK statement is executed.
An application using Cursor Stability cannot read uncommitted data. In addition, the application locks the
row that has been currently fetched, and no other application can modify the contents of the current row.
As the application locks only the row on which the cursor is positioned, two identical queries may still get
different results even if they are issued within a unit of work.
When you want the maximum concurrency while seeing only committed data from concurrent
applications, choose this isolation level.
Read Stability: The Read Stability (RS) isolation level locks those rows that are part of a result set. If you
have a table containing 100,000 rows and the query returns 10 rows, then only 10 rows are locked until
the end of the unit of work.
An application using Read Stability cannot read uncommitted data. Instead of locking a single row, it locks
all rows that are part of the result set. No other application can change or modify these rows. This means
that if you issue a query twice within a unit of work, the second run can retrieve the same answer set as
the first. However, you may get additional rows, because another concurrent application can insert rows
that match the query.
Repeatable Read: The Repeatable Read (RR) isolation level is the highest isolation level available in
DB2. It locks all rows that an application references within a unit of work, no matter how large the result
set. In some cases, the optimizer decides during plan generation to acquire a table-level lock instead
of locking individual rows, because an application using Repeatable Read may otherwise acquire and hold a
considerable number of locks. The values of the LOCKLIST and MAXLOCKS database configuration
parameters affect this decision.
An application using Repeatable Read cannot read uncommitted data of a concurrent application. As the
name implies, this isolation level ensures the repeatable read to applications, meaning that a repeated
query will get the same record set as long as it is executed in the same unit of work. Since an application
using this isolation level holds more locks on rows of a table, or even locks the entire table, the
application may decrease concurrency. You should use this isolation level only when changes to your
result set within a unit of work are unacceptable.
When to choose which isolation level
When you choose the isolation level for your application, decide which concurrency problems are
unacceptable for your application and then choose the isolation level which prevents these problems.
Remember that the more protection you have, the less concurrency is available.
Use the Uncommitted Read isolation level only if you use queries on read-only tables, or if you are using
only SELECT statements and getting uncommitted data from concurrent applications is acceptable. This
isolation level provides the maximum concurrency.
Use the Cursor Stability isolation level when you want the maximum concurrency while seeing only
committed data from concurrent applications.
Use the Read Stability isolation level when your application operates in a concurrent environment and
qualifying rows must remain stable for the duration of the unit of work.
Use the Repeatable Read isolation level if changes to your result set are unacceptable. This isolation
level provides minimum concurrency.
How to specify isolation levels
If no isolation level is specified, DB2 Universal Database uses the default of Cursor Stability (CS). The
following options are available to specify the current isolation level (a combined example follows the list):
At the statement level:
SELECT (INTO), DELETE, UPDATE ... WITH {RR, RS, CS, UR}
For dynamic SQL within the current session:
SET CURRENT ISOLATION = {RR, RS, CS, UR} or SET ISOLATION {RR, RS, CS, UR}
At precompile or bind time:
ISOLATION {RR, RS, CS, UR} option of the command line processor PREP or BIND commands
From the DB2 Call Level Interface (DB2 CLI):
For DB2 CLI, change the isolation level as part of the DB2 CLI configuration (db2cli.ini). This can be done
either by:
o UPDATE CLI CFG FOR SECTION sectionname USING TXNISOLATION {1, 2, 4, 8}
where:
1 = SQL_TXN_READ_UNCOMMITTED - Read Uncommitted (Uncommitted Read)
2 = SQL_TXN_READ_COMMITTED (default) - Read Committed (Cursor Stability)
4 = SQL_TXN_REPEATABLE_READ - Repeatable Read (Read Stability)
8 = SQL_TXN_SERIALIZABLE - Serializable (Repeatable Read)
o editing db2cli.ini and setting applTxnIsolation = {1, 2, 4, 8}
From CLI or ODBC at run time:
Use the SQLSetConnectAttr function with the SQL_ATTR_TXN_ISOLATION attribute to set the
transaction isolation level for the current connection referenced by the ConnectionHandle.
When working with JDBC or SQLJ at run time:
Use the setTransactionIsolation method of the java.sql.Connection interface, where:
TRANSACTION_SERIALIZABLE = Repeatable read
TRANSACTION_REPEATABLE_READ = Read stability
TRANSACTION_READ_COMMITTED = Cursor stability
TRANSACTION_READ_UNCOMMITTED = Uncommitted read
On a Command Line Processor (CLP) Connection:
CHANGE ISOLATION TO {RR, RS, CS, UR, NC} where NC stands for No Commit
On database servers that support REXX:
CHANGE SQLISL TO {RR, RS, CS, UR, NC}
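To make these options concrete, the following sketch combines several of them against the SAMPLE
database; the file name myprog.sqc and the statements themselves are only illustrative:

-- set the isolation level for subsequent CLP connections (issue before connecting)
db2 CHANGE ISOLATION TO CS
db2 CONNECT TO SAMPLE

-- statement level: allow this query alone to read uncommitted data
db2 "SELECT DEPTNUMB, DEPTNAME FROM ORG WITH UR"

-- dynamic SQL within the current session
db2 "SET CURRENT ISOLATION = RS"

-- precompile time for an embedded SQL program
db2 PREP myprog.sqc ISOLATION RR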
Locks, lock wait, deadlock, and lock escalation
Locks:
DB2 Universal Database isolates transactions from each other through the use of locks. A lock is a
mechanism that is used to associate a data resource with a single transaction, with the purpose of
controlling how other transactions interact with that resource while it is associated with the owning
transaction. (The transaction that a locked resource is associated with is said to "hold" or "own" the lock).
The DB2 Database Manager uses locks to prohibit transactions from accessing uncommitted data written
by other transactions (unless the Uncommitted Read isolation level is used) and to prohibit the updating
of rows by other transactions when the owning transaction is using a restrictive isolation level. Once a
lock is acquired, it is held until the owning transaction is terminated (COMMIT or ROLLBACK). At that
point, the lock is released and the data resource is made available to other transactions.
Lock Wait:
If one transaction attempts to access a data resource in a way that is incompatible with the lock being
held by another transaction, that transaction must wait until the owning transaction has ended. When a
lock wait event occurs, the transaction attempting to access the data resource simply stops execution
until the owning transaction has terminated and the incompatible lock is released.
Deadlock:
A deadlock occurs when two or more transactions are each waiting for locks held by the others, forming a
cycle of lock waits that a lock timeout does not break.
Lock Escalation:
When the DB2 Database Manager escalates locks from a lower granularity to a higher one due to
insufficient lock memory (for example, when it converts many row locks into a single table lock), this
is called lock escalation. This process depends on two database configuration parameters: LOCKLIST (the amount
of memory allocated for locks) and MAXLOCKS (the maximum percentage of the lock list an application can hold before escalation).
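Both parameters are ordinary database configuration parameters. A minimal sketch of inspecting and
adjusting them for the SAMPLE database follows; the values 200 and 10 are purely illustrative, not
recommendations:

-- display the current configuration, including LOCKLIST and MAXLOCKS
db2 GET DB CFG FOR SAMPLE

-- enlarge the lock list and lower the per-application escalation threshold
db2 UPDATE DB CFG FOR SAMPLE USING LOCKLIST 200 MAXLOCKS 10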
Monitoring locks and lock information
MONITOR SWITCHES -- To obtain full lock monitor information, turn on the lock monitor switch (or
reset it, if it is already turned on):
o UPDATE MONITOR SWITCHES USING LOCK ON (or OFF)
o RESET MONITOR ALL (or FOR DB <database>)
GET SNAPSHOT FOR -- You can retrieve lock information by issuing a GET SNAPSHOT command for
the following objects:
o ALL DATABASES
o ALL APPLICATIONS
o APPLICATION APPLID <appl-id> (or AGENTID <appl-handle>)
o LOCKS ON <database>
o LOCKS FOR APPLICATION APPLID <appl-id> (or AGENTID <appl-handle>)
db2pd problem determination tool -- DB2 UDB V8.2 introduced a new monitoring and troubleshooting
tool called db2pd. You can use this tool to retrieve lock information:
o db2pd -db <database> -locks showlocks
o db2pd -db <database> -locks wait
o db2pd -db <database> -transactions
o db2pd -db <database> -agents
APIs -- You can capture database snapshots using the snapshot monitor API in your C, C++, or COBOL
application:
o db2MonitorSwitches
o db2GetSnapshot
o db2GetSnapshotSize
o db2ResetMonitor
Snapshot monitor SQL table functions -- You can use SQL queries to retrieve some lock-related
snapshot monitor SQL table functions:
o SNAPSHOT_LOCK
o SNAPSHOT_APPL
o SNAPSHOT_LOCKWAIT
o SELECT * FROM TABLE(SNAPSHOT_APPL('<database>', -1)) AS SNAPSHOT_APPL
o SELECT agent_id, appl_id FROM TABLE(SNAPSHOT_APPL(CAST (NULL as VARCHAR), -1)) AS
SNAPSHOT_APPL
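Putting some of these facilities together, a typical sequence for investigating lock activity on the SAMPLE
database might look like the following sketch; the agent ID 42 is a placeholder for the application handle
of the session you are interested in:

-- enable lock monitoring for this session
db2 UPDATE MONITOR SWITCHES USING LOCK ON

-- snapshot of all locks currently held or waited for in the database
db2 GET SNAPSHOT FOR LOCKS ON SAMPLE

-- lock details for a single application (agent ID 42 is a placeholder)
db2 GET SNAPSHOT FOR LOCKS FOR APPLICATION AGENTID 42

-- quick view of lock waiters using the db2pd tool
db2pd -db SAMPLE -locks wait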
Reducing lock waits and minimizing deadlocks
You can reduce lock waits and deadlocks by:
Issuing COMMIT statements at the right frequency
Choosing the appropriate isolation level
Specifying the FOR FETCH ONLY clause in the SELECT statement (see the sketch after this list)
Releasing read locks using the WITH RELEASE option of the CLOSE CURSOR statement, if acceptable
Using type-2 indexes to eliminate next key locks
Tuning the LOCKLIST and MAXLOCKS database configuration parameters to avoid lock escalations that
impact concurrency
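For the two cursor-related suggestions, a minimal embedded SQL sketch is shown below. The host
variables and the surrounding program are assumed, and the WITH RELEASE clause affects only the read
locks held by Read Stability or Repeatable Read cursors:

-- declare a read-only cursor so DB2 knows that no positioned update will follow
DECLARE C1 CURSOR FOR
    SELECT DEPTNUMB, DEPTNAME FROM ORG FOR FETCH ONLY;

OPEN C1;
FETCH C1 INTO :deptnumb, :deptname;

-- release the read locks acquired by the cursor as soon as they are no longer needed
CLOSE C1 WITH RELEASE;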
Summary of lock-related parameters and registry variables
Database configuration parameters:
LOCKLIST: Indicates the amount of storage that is allocated to the lock list, in 4K pages.
MAXLOCKS: Defines the maximum percentage of the lock list that any one application can hold
before the database manager performs lock escalation.
LOCKTIMEOUT: Specifies the number of seconds that an application will wait to obtain a lock. This
parameter helps avoid global deadlocks for applications.
DLCHKTIME: The deadlock check interval defines the frequency at which the database manager checks
for deadlocks among all the applications connected to a database.
Registry Variables (set with the db2set command, as sketched after this list):
DB2LOCK_TO_RB: Specifies whether lock timeouts cause the entire transaction to be rolled back, or
only the current statement. If DB2LOCK_TO_RB is set to STATEMENT, lock timeouts cause only the
current statement to be rolled back. Any other setting results in transaction rollback.
DB2_KEEPTABLELOCK: Allows DB2 UDB to maintain the table lock when a cursor using the Uncommitted
Read or Cursor Stability isolation level is closed. In this case, the table lock is released at the end of the
transaction, just as it would be released for Read Stability and Repeatable Read scans.
DB2_MAX_NON_TABLE_LOCKS: Defines the maximum number of NON table locks a transaction can
have before it releases all of these locks. NON table locks are table locks that are kept in the hash table
and transaction chain even when the transaction has finished using them. Because transactions often
access the same table more than once, retaining locks and changing their state to NON can improve
performance.
DB2_EVALUNCOMMITTED, DB2_SKIPDELETED and DB2_SKIPINSERTED: These registry variables
will be covered in detail in the following sections.
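Registry variables such as these are set at the instance level with the db2set command and, as a rule,
are picked up only when the instance is restarted. A minimal sketch, using DB2LOCK_TO_RB as the
example:

-- roll back only the failing statement on a lock timeout
db2set DB2LOCK_TO_RB=STATEMENT

-- restart the instance so that the new setting takes effect
db2stop
db2start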

Locking behaviour without lock deferral
To provide deeper insight into locking behavior, consider the following sample business scenario:
Manager Sanders in our organization is about to retire and will be replaced by a newly hired
employee, Thomson. The ORG and STAFF tables from the SAMPLE database are affected by this
organizational change and currently have the following entries:
Listing 1. The ORG table
DEPTNUMB DEPTNAME MANAGER DIVISION LOCATION
-------- -------------- ------- ---------- -------------
10 Head Office 160 Corporate New York
15 New England 50 Eastern Boston
20 Mid Atlantic 10 Eastern Washington
38 South Atlantic 30 Eastern Atlanta
42 Great Lakes 100 Midwest Chicago
51 Plains 140 Midwest Dallas
66 Pacific 270 Western San Francisco
84 Mountain 290 Western Denver
Listing 2. The STAFF table
ID NAME DEPT JOB YEARS SALARY COMM
------ --------- ------ ----- ------ --------- ---------
10 Sanders 20 Mgr 7 18357.50 -
20 Pernal 20 Sales 8 18171.25 612.45
30 Marenghi 38 Mgr 5 17506.75 -
40 O'Brien 38 Sales 6 18006.00 846.55
50 Hanes 15 Mgr 10 20659.80 -
60 Quigley 38 Sales - 16808.30 650.25
70 Rothman 15 Sales 7 16502.83 1152.00
80 James 20 Clerk - 13504.60 128.20
90 Koonitz 20 Sales 6 18001.75 1386.70
100 Plotz 42 Mgr 7 18352.80 -
110 Ngan 15 Clerk 5 12508.20 206.60
120 Naughton 38 Clerk - 12954.75 180.00
130 Yamaguchi 42 Clerk 6 10505.90 75.60
140 Fraye 51 Mgr 6 21150.00 -
150 Williams 51 Sales 6 19456.50 637.65
160 Molinare 10 Mgr 7 22959.20 -
170 Kermisch 15 Clerk 4 12258.50 110.10
180 Abrahams 38 Clerk 3 12009.75 236.50
190 Sneider 20 Clerk 8 14252.75 126.50
200 Scoutten 42 Clerk - 11508.60 84.20
210 Lu 10 Mgr 10 20010.00 -
220 Smith 51 Sales 7 17654.50 992.80
230 Lundquist 51 Clerk 3 13369.80 189.65
240 Daniels 10 Mgr 5 19260.25 -
250 Wheeler 51 Clerk 6 14460.00 513.30
260 Jones 10 Mgr 12 21234.00 -
270 Lea 66 Mgr 9 18555.50 -
280 Wilson 66 Sales 9 18674.50 811.50
290 Quill 84 Mgr 10 19818.00 -
300 Davis 84 Sales 5 15454.50 806.10
310 Graham 66 Sales 13 21000.00 200.30
320 Gonzales 66 Sales 4 16858.20 844.00
330 Burke 66 Clerk 1 10988.00 55.50
340 Edwards 84 Sales 7 17844.00 1285.00
350 Gafney 84 Clerk 5 13030.50 188.00
As described in Table 1, session 1 executes the SQL statements that implement the organizational
change within a single unit of work. In parallel, session 2 runs a SELECT on table ORG while the
UPDATE in session 1 has not yet been committed. All transactions are executed with the default Cursor
Stability (CS) isolation level.
Table 1. Transactions for updating organization changes
Session 1:
db2 CONNECT TO SAMPLE
db2 +c INSERT INTO STAFF VALUES(15, 'Thomson', 20, 'Mgr', 0, 17000, NULL)
db2 +c UPDATE ORG SET MANAGER = 15 WHERE DEPTNUMB = 20
db2 +c DELETE FROM STAFF WHERE ID = 10
db2 INSERT INTO HR_LOG VALUES(CURRENT TIMESTAMP, 'Mgr Sanders successfully replaced by Mgr Thomson')
db2 COMMIT

Session 2 (issued while the unit of work in session 1 is still uncommitted):
db2 CONNECT TO SAMPLE
db2 "SELECT * FROM ORG WHERE MANAGER >= 100 AND MANAGER <= 300"
Issuing the commands LIST APPLICATIONS SHOW DETAIL and GET SNAPSHOT FOR LOCKS FOR
APPLICATION AGENTID xxxx (and optionally GET SNAPSHOT FOR DYNAMIC SQL ON SAMPLE)
reveals the current locking behaviour for the UPDATE statement in session 1 and the SELECT statement
in session 2:
Table 2. Transaction locking
Session 1 (UPDATE statement) -- locks held
Lock Mode              Object Type          Table Name
Exclusive (X)          Row                  ORG
Share (S)              Internal Plan Lock
Intent Exclusive (IX)  Table                ORG

Session 2 (SELECT statement) -- locks waiting
Lock Mode              Object Type          Table Name
Exclusive (X)          Row                  ORG
The uncommitted UPDATE in session 1 holds an exclusive row lock on the updated row in table ORG,
preventing the SELECT query in session 2 from returning, even though the row being updated in session
1 does not currently satisfy the query in session 2. This is because the CS isolation level dictates that any
row accessed by a query must be locked while the cursor is positioned on that row. Session 2 cannot
obtain a lock on that row until session 1 releases its lock.

Lock deferral with DB2_EVALUNCOMMITTED
If you know your application behavior well, and you can tolerate the potential evaluation of query
predicates against uncommitted data, then you should use lock deferral. Doing so may improve
application concurrency.
DB2_EVALUNCOMMITTED allows scans with isolation level Cursor Stability (CS) or Read Stability (RS)
to avoid or defer row locking until a data row is known to satisfy predicate evaluation. Rows that do not
satisfy your query are bypassed.
Regarding the sample shown in Table 1, when scanning the table, you can avoid the lock wait in session 2
by using the evaluate uncommitted feature. With this feature, the predicate is evaluated first, and the row is
locked only if the predicate evaluates to true. As a result, the query in session 2 does not attempt to lock
the third row in the table, thereby increasing application concurrency. This also means that predicate
evaluation in session 2 occurs against the uncommitted value MANAGER = 15 from session 1. The query
in session 2 omits the third row from its result set based on this uncommitted value, even though that value
may never be committed if session 1 rolls back its update.
If the order of operations were reversed, concurrency could still be improved with evaluate uncommitted.
Under default locking behavior, session 2 would first acquire a row lock, preventing the searched
UPDATE in session 1 from proceeding, even though the UPDATE in session 1 would not change the row
locked by the query of session 2. If the searched UPDATE in session 1 first examined rows and then
locked them only if they qualified, the UPDATE in session 1 would not block.
DB2_EVALUNCOMMITTED is enabled by issuing a db2set DB2_EVALUNCOMMITTED=ON command
and restarting DB2.
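A minimal sketch of re-running the critical part of the scenario from Table 1 with the feature active; only the
relevant statements are repeated, and session 1 leaves its UPDATE uncommitted:

-- after: db2set DB2_EVALUNCOMMITTED=ON, db2stop, db2start

-- session 1: update the manager of department 20, but do not commit
db2 CONNECT TO SAMPLE
db2 +c UPDATE ORG SET MANAGER = 15 WHERE DEPTNUMB = 20

-- session 2: the scan now evaluates the predicate before locking the row
db2 CONNECT TO SAMPLE
db2 "SELECT * FROM ORG WHERE MANAGER >= 100 AND MANAGER <= 300"

Instead of waiting on the lock held by session 1, the query in session 2 delivers the following results: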
Listing 3. Results from the SELECT after uncommitted UPDATE with
DB2_EVALUNCOMMITTED=ON
DEPTNUMB DEPTNAME MANAGER DIVISION LOCATION
-------- -------------- ------- ---------- -------------
10 Head Office 160 Corporate New York
42 Great Lakes 100 Midwest Chicago
51 Plains 140 Midwest Dallas
66 Pacific 270 Western San Francisco
84 Mountain 290 Western Denver
Now assume that the INSERT INTO STAFF of the new row with ID 15 has not been committed yet, and a SELECT
* FROM STAFF WHERE ID >= 100 AND ID <= 150 has been issued by session 2. With
DB2_EVALUNCOMMITTED=ON, the SELECT does not have to wait for the exclusive row lock held by
the INSERT, but delivers the following results:
Listing 4. Results from the SELECT after uncommitted INSERT with
DB2_EVALUNCOMMITTED=ON
ID NAME DEPT JOB YEARS SALARY COMM
------ --------- ------ ----- ------ --------- ---------
100 Plotz 42 Mgr 7 18352.80 -
110 Ngan 15 Clerk 5 12508.20 206.60
120 Naughton 38 Clerk - 12954.75 180.00
130 Yamaguchi 42 Clerk 6 10505.90 75.60
140 Fraye 51 Mgr 6 21150.00 -
150 Williams 51 Sales 6 19456.50 637.65
Finally, look at the DELETE FROM STAFF WHERE ID = 10. The DELETE has not been committed yet
and the same SELECT * FROM STAFF WHERE ID >= 100 AND ID <= 150 has been issued by
session 2. With DB2_EVALUNCOMMITTED=ON, the SELECT delivers the same results as
shown after the uncommitted INSERT above.
Restrictions on deferred locking
The isolation level must be CS or RS.
Deferred locking requires that row locking occur.
SARGable evaluation predicates must exist.
Evaluate uncommitted is not applicable to scans on the catalog tables.
For MDC tables, block locking can be deferred for an index scan, but not for table scans.
Deferred locking will not occur on a table which is executing an in-place table reorg.
Deferred locking will not occur for an index scan where the index is type-1.
Deferred locking does not apply to internal scans during insert, update, or delete actions for integrity
processing -- such as RI cascade/validation or MQT maintenance.
For ISCAN-FETCH plans, row locking is not deferred to the data access stage; rather, the row is
locked during index access, before the row is fetched from the table.
Deleted rows are unconditionally skipped for table scans, while deleted type-2 index keys are only
skipped if the registry variable DB2_SKIPDELETED is enabled.
DB2_SKIPINSERTED
By default, DB2 waits until the INSERT transaction completes (commits or rolls back), and processes the
data accordingly. This makes sense when an application modifies data by deleting it and
inserting a new image of the data, or when two applications use a table to pass data between themselves,
with the first application inserting data into the table and the second one reading it. In such cases the data
must be processed by the second application in the order presented in the table, so if the next row to be
read is being inserted by the first application, the second application must wait until the insert is
committed.
With DB2_SKIPINSERTED=ON, uncommitted insertions can be ignored for cursors using the CS or RS
isolation levels. DB2 unconditionally bypasses any uncommitted inserted rows during the scan.
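A minimal sketch of the scenario, assuming DB2_SKIPINSERTED=ON has already been set with db2set
and the instance has been restarted; the statements are taken from Table 1:

-- session 1: insert the new manager, but do not commit
db2 CONNECT TO SAMPLE
db2 +c INSERT INTO STAFF VALUES(15, 'Thomson', 20, 'Mgr', 0, 17000, NULL)

-- session 2: the scan skips the uncommitted inserted row instead of waiting on its lock
db2 CONNECT TO SAMPLE
db2 "SELECT * FROM STAFF WHERE ID >= 10 AND ID <= 50"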
With the INSERT of the new ID 15 from Table 1 still uncommitted, the SELECT * FROM STAFF WHERE
ID >= 10 AND ID <= 50 issued by session 2 delivers the following results:
Listing 5. Results from the SELECT with DB2_SKIPINSERTED=ON
ID NAME DEPT JOB YEARS SALARY COMM
------ --------- ------ ----- ------ --------- ---------
10 Sanders 20 Mgr 7 18357.50 -
20 Pernal 20 Sales 8 18171.25 612.45
30 Marenghi 38 Mgr 5 17506.75 -
40 O'Brien 38 Sales 6 18006.00 846.55
50 Hanes 15 Mgr 10 20659.80 -
In the same way, a SELECT * FROM STAFF without any query predicate will return all records except
the uncommitted inserted one.
If you issue a GET SNAPSHOT FOR LOCKS FOR APPLICATION AGENTID xxxx command, you can
see that the INSERT lock has a new attribute 0x00000008:
Listing 6. New lock attribute for DB2_SKIPINSERTED=ON
Lock Name = 0x02000300270000000000000052
Lock Attributes = 0x00000008
Release Flags = 0x40000000
Lock Count = 1
Hold Count = 0
Lock Object Name = 39
Object Type = Row
Tablespace Name = USERSPACE1
Table Schema = WSCHUE
Table Name = STAFF
Mode = X
DB2_SKIPDELETED
DB2_SKIPDELETED acts in the same manner for deletions as DB2_SKIPINSERTED does for insertions.
When set to ON, it allows uncommitted deletions to be ignored for cursors using the CS or RS isolation
levels. DB2 unconditionally bypasses any uncommitted deleted rows during the scan.
Regarding the sample shown in Table 1, assume that the DELETE FROM STAFF WHERE ID = 10 has
not been committed yet, and a SELECT * FROM STAFF WHERE ID >= 10 AND ID <= 50 has been
issued by session 2. With DB2_SKIPDELETED=ON, the SELECT delivers the following results:
Listing 7. Results from the SELECT with DB2_SKIPDELETED=ON
ID NAME DEPT JOB YEARS SALARY COMM
------ --------- ------ ----- ------ --------- ---------
20 Pernal 20 Sales 8 18171.25 612.45
30 Marenghi 38 Mgr 5 17506.75 -
40 O'Brien 38 Sales 6 18006.00 846.55
50 Hanes 15 Mgr 10 20659.80 -
In the same way, a SELECT * FROM STAFF without any query predicate will return all records except
the uncommitted deleted one.
If you issue a GET SNAPSHOT FOR LOCKS FOR APPLICATION AGENTID xxxx command, you can
see that the DELETE lock has a new attribute 0x00000020:
Listing 8. New lock attribute for DB2_SKIPDELETED=ON
Lock Name = 0x02000300040000000000000052
Lock Attributes = 0x00000020
Release Flags = 0x40000000
Lock Count = 1
Hold Count = 0
Lock Object Name = 4
Object Type = Row
Tablespace Name = USERSPACE1
Table Schema = WSCHUE
Table Name = STAFF
Mode = X

New SQL to query the current DB2 registry
To list the DB2 registry variables, you can issue a db2set -all command:
Listing 9. db2set -all
[e] DB2PATH=E:\SQLLIB
[i] DB2_SKIPINSERTED=ON
[i] DB2_EVALUNCOMMITTED=ON
[i] DB2_SKIPDELETED=ON
[i] DB2ACCOUNTNAME=C99T68FK\wschue
[i] DB2INSTOWNER=C99T68FK
[i] DB2PORTRANGE=60000:60003
[i] DB2_GRP_LOOKUP=LOCAL
[i] DB2INSTPROF=E:\SQLLIB
[i] DB2COUNTRY=DE
[i] DB2COMM=TCPIP
[i] DB2CODEPAGE=1208
[g] DB2_EXTSECURITY=YES
[g] DB2_DOCCDPATH=E:\SQLLIB\
[g] DB2SYSTEM=C99T68WS
[g] DB2PATH=E:\SQLLIB
[g] DB2INSTDEF=DB2
[g] DB2ADMINSERVER=DB2DAS00
The output from db2set may include values that have been set but are not yet active. To help you
determine which values were in effect when the instance was started, DB2 UDB V8.2.2 includes a new
REG_LIST_VARIABLES table function.
This new table function not only returns the currently active registry variables, but also makes it easier to
monitor remote DB2 UDB servers without having to physically log in to each server and explicitly issue
the db2set command. For a partitioned database, this function returns the effective registry for the
partition that you are connected to.
Listing 10. New REG_LIST_VARIABLES table function
db2 SELECT
DBPARTITIONNUM,
SUBSTR(REG_VAR_NAME, 1, 25) AS "REG_VAR_NAME",
SUBSTR(REG_VAR_VALUE, 1, 15) AS "REG_VAR_VALUE",
LEVEL
FROM TABLE(SYSPROC.REG_LIST_VARIABLES()) AS REGISTRYINFO


DBPARTITIONNUM REG_VAR_NAME REG_VAR_VALUE LEVEL
-------------- ------------------------- --------------- -----
0 DB2ADMINSERVER DB2DAS00 G
0 DB2CODEPAGE 1208 I
0 DB2COMM TCPIP I
0 DB2COUNTRY DE I
0 DB2INSTDEF DB2 G
0 DB2INSTPROF E:\SQLLIB I
0 DB2PATH E:\SQLLIB E
0 DB2SYSTEM C99T68WS G
0 DB2_GRP_LOOKUP LOCAL I
0 DB2TEMPDIR E:\SQLLIB\ E
0 DB2PORTRANGE 60000:60003 I
0 DB2INSTOWNER C99T68FK I
0 DB2ACCOUNTNAME C99T68FK\wschue I
0 DB2_SKIPDELETED ON I
0 DB2_EVALUNCOMMITTED ON I
0 DB2_DOCCDPATH E:\SQLLIB\ G
0 DB2_EXTSECURITY YES G
0 DB2_SKIPINSERTED ON I
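To check only the three lock-deferral variables discussed in this article, the output of
REG_LIST_VARIABLES can be filtered with an ordinary WHERE clause; a small sketch using the same
column formatting as Listing 10:

db2 SELECT
    SUBSTR(REG_VAR_NAME, 1, 25) AS "REG_VAR_NAME",
    SUBSTR(REG_VAR_VALUE, 1, 15) AS "REG_VAR_VALUE"
    FROM TABLE(SYSPROC.REG_LIST_VARIABLES()) AS REGISTRYINFO
    WHERE REG_VAR_NAME IN ('DB2_EVALUNCOMMITTED', 'DB2_SKIPINSERTED', 'DB2_SKIPDELETED')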

Conclusion
Lock avoidance is a helpful new feature in DB2 Universal Database that can improve concurrency for CS
or RS isolation scans. It is implemented by setting the DB2_EVALUNCOMMITTED,
DB2_SKIPINSERTED, and DB2_SKIPDELETED registry variables. These registry variable settings apply
at compile time for dynamic SQL, and at bind time for static SQL.
With DB2_EVALUNCOMMITTED turned on, you can avoid locking rows that are not part of the final result
set.
By enabling DB2_SKIPINSERTED, you can skip any row that has been locked because it is an
uncommitted inserted row. However, you should not use this registry variable in situations where skipping
uncommitted inserts is unacceptable, for example when two applications use a table to pass data
between them, or when an application does not use UPDATE statements but instead deletes the old data
and then inserts the new data.
DB2_SKIPDELETED is used to skip uncommitted deletions. When this registry variable is not set, the CS
and RS isolation levels require that only committed deleted data be processed.
