
1. Tell me something about yourself?

Explain your education, family background, and work experience.

2. What are the system roles and status by default?

sa_role, sso_role, and oper_role are system roles. They are granted to the sa login by default.

3.What are the daily activities as a Sybase DBA?

Check the status of the server (ps -eaf | grep servername, or showserver at OS level, or simply try to log in).

If this fails, the server is down and must be started after reviewing the errorlog.

Check the free space in the file systems (df -k).

Check the status of the databases (sp_helpdb).

Check the scheduled cron jobs.

Check whether any process is blocked (sp_who and sp_lock).

See if any backups need to be taken or databases loaded.

Check the errorlog.
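For illustration, the morning checks could be wrapped in a small shell script along these lines (a minimal sketch; the server name SYBSRV, the errorlog path, and the password variable are assumptions, not part of any standard installation):

#!/bin/sh
# Minimal daily health-check sketch (server name, paths, and password handling are assumed)
ps -eaf | grep dataserver | grep -v grep        # is the ASE process running?
df -k                                           # file system free space
tail -100 $SYBASE/install/SYBSRV.log            # recent errorlog entries
isql -Usa -P"$SAPWD" -SSYBSRV <<EOF
sp_helpdb       -- database status and space
go
sp_who          -- a non-zero blk column indicates a blocked process
go
EOF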

4.What are the default databases in ASE-12_5?

master, model, tempdb, sybsystemprocs, sybsystemdb

Optional databases: pubs2, pubs3, sybsecurity (auditing), dbccdb


5.Tell about your work environment?

I worked with ASE 12.5.3 on Solaris 8.

Altogether we had 4 ASE servers on 4 different Solaris boxes.

Of these, 2 were production boxes, 1 was UAT, and 1 was a development server.

The production boxes had 2 CPUs each, UAT had 2 CPUs, and the development server had 4 CPUs.

In total we had 180 databases, with 60 on production and 60 on development.

The biggest database was 30 GB.

Number of users: 5,000 in production.

We handled production issues through tickets received by email.

6.If production server went down what all the steps u will follow?

First I will notify all the application managers, and they will send an alert message to all the
users regarding the downtime.

Then I will look into the errorlog and take relevant action based on the error message. If I
can't resolve the issue, I will escalate to my DBA manager and log a case with Sybase as
priority P1 (system down).

7. What will you do If you heard Server performance is down?

First, check the network transfer rate (e.g., with ping); it might be a network problem, in which
case contact the network team. Make sure that tempdb is large enough for the user workload; a
common rule of thumb is that tempdb should be about 25% of the total size of the user
databases. Make sure update statistics and sp_recompile are run on a regular basis. Also check
the database fragmentation level and defragment if necessary. Run sp_sysmon and sp_monitor
and analyze the output (CPU utilization, etc.).

8.Query performance down?


First run set showplan on to see how the query is being executed and analyze the output; based
on that, tune the query, creating indexes on the tables involved if necessary. Also check whether
the optimizer is picking the right plan. Run optdiag to check when update statistics was last
run, since query optimization depends on the statistics, then run sp_recompile so that stored
procedures pick a new plan based on the current statistics.
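For example, a diagnostic session in isql might look like this (the table, column, and index names below are hypothetical):

set showplan on
go
set statistics io on
go
select * from orders where cust_id = 42
go
-- if the plan shows a table scan, an index may help:
create index idx_orders_cust on orders(cust_id)
go
update statistics orders
go
sp_recompile orders
go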

9. What all the precautions you will take to avoid the same type of problem?

We never had such an issue, but I would document the problem along with the steps taken to
resolve it.

10. If the time comes such that you had to take Important decision, but your reporting
manager is not there, so how you will decide?

I will approach my project manager's boss, explain the situation, and seek permission from him;
if he's not available, I will take the call myself and keep all the application managers in the
loop.

11. How do check the current running processes?

ps -eaf

12. Can you create your own system stored procedures?

Yes, we can; for example, we can create stored procedures to check the fragmentation level, etc.
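As a sketch, a user-defined system procedure is created in sybsystemprocs with an sp_ prefix so it can be called from any database (the procedure name and logic here are illustrative only, not a standard procedure):

use sybsystemprocs
go
create procedure sp__dbspace
as
    -- space allocated to the current database, assuming 2K pages (512 pages = 1 MB)
    select db_name = db_name(),
           allocated_mb = sum(size) / 512
    from master..sysusages
    where dbid = db_id()
go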

13. How do you free a suspended database? Issue an ASE kill command on the offending
connection, then un-suspend the db:

select lct_admin("unsuspend", db_id("db_name"))
14. Which command shows the process running on a given port (only the superuser can run
it)?

/var/tmp/lsof | grep 5300 (su)

netstat -anv | grep 5300 (anyone)

15. How do you synchronize logins from a lower version server to a higher version server
(e.g., 11.9.2 to 12.5)?

Take the 11.9.2 syslogins structure and, on the 12.5 server, create a table named logins in
tempdb with this structure. bcp the old syslogins data into this table, then use master and run:
insert into syslogins select *, null, null from tempdb..logins (the two nulls cover the columns
added in 12.5).
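A hedged sketch of the whole sequence (server names, the password placeholder, and the file name are assumptions). At the OS level:

bcp master..syslogins out syslogins.dat -Usa -P<password> -SOLDSRV -c
bcp tempdb..logins in syslogins.dat -Usa -P<password> -SNEWSRV -c

(the tempdb..logins table must first be created on the 12.5 server with the 11.9.2 syslogins structure). Then in isql on the 12.5 server:

use master
go
sp_configure "allow updates to system tables", 1
go
insert into syslogins select *, null, null from tempdb..logins
go
sp_configure "allow updates to system tables", 0
go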

16. How to delete UNIX files which are more than 3 days old?

You must be in the parent directory of snapshots and execute the command below:

find snapshots -type f -mtime +3 -exec rm {} \;

find /backup/logs/ -name "daily_backup*" -mtime +21 -exec rm -f {} \;

17. How do you find the time remaining for the rollback of a killed process?

kill 826 with statusonly

18. What is the difference between truncate_only & no_log?

(i) truncate_only: used to truncate the log gracefully. It checkpoints the database before
truncating the log, and removes the inactive part of the log without making a backup copy. Use
it on databases whose log segment is not on a separate device from the data segments. Don't
specify a dump device or Backup Server name. NOTE: Use dump transaction with no_log as a
last resort, and only after dump transaction with truncate_only fails.

(ii) no_log: use no_log when your transaction log is completely full. no_log does not
checkpoint the database before dumping the log; it removes the inactive part of the log without
making a backup copy, and without recording the operation in the transaction log. Use no_log
only when you have totally run out of log space and can't run the usual dump transaction
command. Use no_log as a last resort, and only after dump transaction with truncate_only
fails.

When to use dump transaction with truncate_only or with no_log:

When the log is on the same segment as the data, use dump transaction with truncate_only to
truncate the log.

When you are not concerned with the recovery of recent transactions (for example, in an early
development environment), use dump transaction with truncate_only. When your usual method
of dumping the transaction log (either the standard dump transaction command or dump
transaction with truncate_only) fails because of insufficient log space, use dump transaction
with no_log to truncate the log without recording the event.

Note: dump the database immediately afterward to copy the entire database, including the log.

NOTE: You should always try truncate_only first. There are times when there is
absolutely no space left in the transaction log, and you will have to use the
no_log option, which truncates the transaction log but does not write into it.
A dump tran with truncate_only does write into the transaction log.
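In practice the commands look like this (the database and dump file names are placeholders):

dump transaction mydb with truncate_only
go
-- only if the above fails because the log is completely full:
dump transaction mydb with no_log
go
-- afterwards, re-establish a recovery point:
dump database mydb to "/backups/mydb.dmp"
go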

19. Define Normalization?

It is the process of designing the database schema in such a way as to eliminate redundant

columns and database inconsistency.

Normalization is the process of organizing data to minimize redundancy. It usually involves
dividing a database into two or more tables and defining relationships between the tables. The
objective is to isolate data so that additions, deletions, and modifications of a field can be made
in just one table and then propagated through the rest of the database via the defined
relationships. Normalization can go up to five levels (first, second, and third normal form,
BCNF, and beyond), each level further reducing the repetition of data. In practice you normally
normalize a database up to three levels:
1st normal form
2nd normal form
3rd normal form


The goal of database normalization is to decompose relations with anomalies in order to produce
smaller, well-structured relations. Normalization usually involves dividing large, badly-formed
tables into smaller, well-formed tables and defining relationships between them.
(Reference: Wikipedia)

The purpose of database normalization is to eliminate data redundancy and inconsistent
dependency. Redundant data wastes disk space and creates maintenance problems: for example,
if the customer name is stored in more than one place, it must be changed or deleted in every
place at update or delete time, which also increases processing time. Inconsistent dependency
can make data difficult to access because the path to the data is missing or broken.

There are certain rules for database normalization; each rule is called a normal form. If the first
rule is observed, the database is said to be in first normal form; if the first three rules are
observed, the database is considered to be in third normal form. There are further rules too, such
as fourth normal form and fifth normal form.

First Normal Form

Remove repeating groups of information.

Assign a primary key.

Each attribute is atomic; it must not contain multiple values.

Second Normal Form

Move redundant data to a separate table.

Relate this table with a foreign key.

Third Normal Form

Remove columns that do not depend on the primary key.

20.What are the types of normalization?


First normal form

The rules for First Normal Form are:

Every column must be atomic. It cannot be decomposed into two or more subcolumns.

You cannot have multivalued columns or repeating groups

Each row and column position can have only one value.

Second normal form

For a table to be in second normal form, every non-key field must depend on the entire primary
key, not on part of a composite primary key. If a database has only single-field primary keys, it
is automatically in Second normal form.

Third normal form

For a table to be in third normal form, a non-key field cannot depend on another non-key field.

21. What are the precautions taken to reduce the down time?

Disk mirroring or a warm standby.

22. What are the isolation levels? List the different isolation levels in Sybase. Which is the default?

To avoid manually overriding locking, we have transaction isolation levels, which are tied to the
transaction.

The different isolation levels are 0, 1, 2, and 3.

Isolation level 0: allows reading pages that are currently being modified; dirty reads are
allowed.

Isolation level 1: a read operation can only read committed pages; no dirty reads are allowed.

Isolation level 2 (repeatable read): allows a single page to be read many times within the same
transaction and guarantees that the same value is read each time, by preventing other users from
updating the rows that have been read.

Isolation level 3 (serializable): additionally prevents other transactions from updating, deleting,
or inserting rows for pages previously read within the transaction.

Isolation level 1 is the default.
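The level can be changed per session, or per table in a single query with a lock hint, for example:

set transaction isolation level 3
go
-- or, for one table in one query only:
select * from titles holdlock
go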

23. What is optdiag?

The optdiag utility displays statistics from the systabstats and sysstatistics
tables. optdiag can also be used to update sysstatistics information. Only the SA can run
optdiag. (It is a command-line tool for reading, writing, and simulating table, index, and column
statistics.)

Advantages of optdiag

optdiag can display statistics for all the tables in a database, or for a single table

optdiag output contains additional information useful for understanding query costs, such as
index height and the average row length.

optdiag is frequently used for other tuning tasks, so you should have these reports on hand

Disadvantages of optdiag

It produces a lot of output, so if you need only a single piece of information, such as the number
of pages in the table, other methods are faster and have lower system overhead.
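A typical invocation writes the statistics for one table to a file; the -i option loads (possibly edited) statistics back in. The server name, password, and file name below are placeholders:

optdiag statistics pubs2..titles -Usa -P<password> -SSYBSRV -o titles.opt
optdiag statistics pubs2..titles -Usa -P<password> -SSYBSRV -i titles.opt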

NOTE: What are the default character set and sort order after installation of Sybase ASE 15?

The default character set is cp850, which supports the English language, upper case and lower
case, and any special accent characters that are used in European languages.

The default sort order that goes with this character set is binary, which is the fastest sort order
when building index structures or executing order by clauses.
24. How frequently you defrag the database?

Whenever there are heavy insertions, updates, and deletions in a table, we defragment it.

25. In 12.5 how to configure procedure cache?

sp_configure "procedure cache percent" (in 12.5 the procedure cache is sized as a percentage of
total memory; named data caches are configured with sp_cacheconfig).

26. What are the default page sizes in ASE 12.5?

Supported page sizes are 2K (the default), 4K, 8K, and 16K.

27. How do you see the performance of the Sybase server?

Using sp_sysmon, sp_monitor, sp_who and sp_lock

28. What are the different types of shells?

Bourne Shell, C-Shell, Korn-Shell

29. What is the difference between Bourne shell and K shell?

The Bourne shell is a basic shell which is bundled with all UNIX systems, whereas the

Korn shell is a superset of the Bourne shell. It has added features such as aliases and

command-line editing, and a history facility that can display up to 200 previous commands.


30. How do you see the CPU utilization on UNIX?

using sar & top

31. How to mount a file system?

mount <device> <mount point>

32. How do you get a port number?

netstat -anv | grep 5000

/var/tmp/lsof | grep 5300

33. How do you check the long running transactions ?

Using the master..syslogshold table.
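For example, to see the oldest active transaction holding up log truncation in a given database (the database name is a placeholder):

select spid, starttime, name
from master..syslogshold
where dbid = db_id("mydb")
go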

34. What is an Index? What are the types of Indexes?

An index is a separate storage structure created for a table. There are two types of

indexes: clustered and nonclustered.

Clustered vs. Nonclustered Indexes

Typically, a clustered index will be created on the primary key of a table, and non-clustered
indexes are used where needed.

Nonclustered indexes:

Leaf pages of the b-tree contain pointers to the data rows rather than the rows themselves.

Lower overhead on inserts than clustered indexes.

Best for single-key queries.

The last page of the index can become a hot spot.

Up to 249 nonclustered indexes per table.

Clustered index:

Records in the table are sorted physically by key values.

Only one clustered index per table.

Higher overhead on inserts if a reorganization of the table is required.

Best for queries requesting a range of records.

The index must exist on the same segment as the table.

Note: With the lock datapages or lock datarows locking schemes, clustered indexes are sorted
physically only upon creation. After that, the index behaves like a nonclustered index.
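For illustration (the table and column names are hypothetical):

create clustered index ci_orders on orders(order_id)
go
create nonclustered index nc_orders_cust on orders(cust_id)
go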

35.What is your challenging task?

Master database recovery

36. What are the dbcc commands?

The database consistency checker (dbcc) provides commands for checking the logical

and physical consistency of a database. Two major functions of dbcc are:

Checking page linkage and data pointers at both page level and row level,
using checkstorage, checktable, and checkdb.

Checking page allocation, using checkstorage, checkalloc, checkverify, tablealloc, and
indexalloc.

The main commands are dbcc checkstorage, dbcc checktable, dbcc checkalloc, dbcc indexalloc,
dbcc tablealloc, and dbcc checkdb.
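Typical invocations (database and table names are placeholders; dbcc checkstorage additionally requires the dbccdb database to be configured, and checktable is run from within the database):

dbcc checkdb(mydb)
go
dbcc checkalloc(mydb)
go
dbcc checktable(mytable)
go
dbcc checkstorage(mydb)
go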
37. How to find on Object Name from a Page Number?

dbcc traceon(3604)
dbcc page(dbid, pageno)

The objid in the output can then be translated with object_name(objid).

38. What is table partitioning?

It is splitting a large table into smaller pieces, with alter table table_name partition n.

39. What is housekeeping task?

When ASE is idle, the housekeeper task issues checkpoints that automatically flush dirty

pages from the buffer cache to disk.

40. What are the steps you take if your server process gets slow down?

It is an open-ended question; as far as I am concerned:

first I will check the network speed (ping)

then I check the errorlog

I check the indexes

I check the transaction log

I check tempdb

I check when update statistics was last run; if it is out of date, I update the statistics, followed
by sp_recompile.

41. How do you check the Sybase server running from UNIX box?

ps -ef | grep servername, and showserver

42. What are the db_options?


trunc log on chkpt, abort tran on log full, select into/bulkcopy/pllsort, single

user, dbo use only, no chkpt on recovery

43. How do you recover the master database?

First I check that the dumps of the important system tables are clean:

sysdevices, sysdatabases, sysusages, sysalternates, syslogins, sysloginroles.

Then I build a new master device using buildmaster.

I shut down the server.

I restart the server in single-user mode (-m in the runserver file).

I load the dumps of the important system tables.

I check the loaded system tables against the dumped copies.

I restart in normal mode.

44. How do you know how a particular query is running?

set showplan on

45. How do you put master database in single-user mode?

Using the -m flag in the runserver file (single-user mode).

46 How do you set the sa password?

With the -psa flag in the runserver file (regenerates the sa password at startup).

47. What is hotspot?


Multiple transactions inserting into a single table, all contending for the same (last) page.

48. How do you check the current run level in UNIX?

who -r

49. What is defncopy?

It is a utility used to copy the definitions of objects in a database, either from a database to an
operating system file or from an operating system file to a database. You invoke the defncopy
program directly from the operating system. defncopy provides a non-interactive way of copying
out definitions (create statements) for views, rules, defaults, triggers, and procedures from a
database to an operating system file.
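A sample invocation that copies out the definitions of two pubs2 objects (the server name, password, and file name are placeholders):

defncopy -Usa -P<password> -SSYBSRV out /tmp/defs.sql pubs2 titleview byroyalty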

50. What is bcp?

It is a utility to copy data between a table and a flat file, in either direction.

51. What are the modes of bcp?

Fast bcp and slow bcp are the two modes; bcp in works in one of them.

Slow bcp logs each row insert that it makes; it is used for tables that have one or more

indexes or triggers.

Fast bcp logs only page allocations, copying data into tables without indexes or

triggers at the fastest speed possible.

To determine the bcp mode that is best for your copying task, consider the:

Size of the table into which you are copying data

Amount of data that you are copying in

Number of indexes on the table

Amount of spare database device space that you have for re-creating indexes

Fast bcp might enhance performance; however, slow bcp gives you greater data recoverability.
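For example (the server name and password are placeholders; -c selects character format and -b sets the batch size):

bcp pubs2..authors out authors.dat -Usa -P<password> -SSYBSRV -c
bcp pubs2..authors in authors.dat -Usa -P<password> -SSYBSRV -c -b 1000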

52. What are the types in bcp?

bcp in & bcp out

53. What is defrag?

Defragmentation is dropping and re-creating the indexes, so that the gaps are

filled in.

54.What is the prerequisite for bcp?

We need to set the "select into/bulkcopy" database option (required for fast bcp).

55. What is slow bcp?

In this mode there are indexes (or triggers) on the table.

56. What is fast bcp?

In this mode there won't be any indexes or triggers on the table.

57. Will triggers fires during bcp?

No, triggers won't fire during bcp.


58.What is primary key, foreign key and unique key?

PRIMARY KEY: A primary key is one which uniquely identifies a row of a table. It does not
allow null values and does not allow duplicate values. (A primary key is created as a clustered
index by default.)

FOREIGN KEY: A foreign key is one which refers to the primary key of another table.

UNIQUE KEY: A unique key also uniquely identifies a row of a table, with a difference: it does
not allow duplicate values, but it does allow null values (any number of nulls in Oracle; only a
single null in SQL Server 2000).

Both function in a similar way, with the slight difference above, so declaring a primary key is
usually the best choice. (A unique key creates a nonclustered index by default.)

NOTE: Write a query to find the duplicate rows in a table.

select name, count(*) from tablename group by name having count(*) > 1

Give me the global variable names for the descriptions given below:
1. Error number reported for last SQL statement ( @@error)
2. Current transaction mode (chained or unchained)(@@tranchained)
3. Status of previous fetch statement in a cursor(@@sqlstatus)
4. Transaction nesting level(@@trancount)
5. Current process ID(@@spid)

What is the difference between a sub-query and a correlated sub-query?


Ans: A subquery is a query that SQL Server must evaluate before it can process the main query;
it doesn't depend on a value in the outer query.
A correlated subquery is one that depends on a value from the outer query.

What command do we use to rename a database?


Ans: sp_renamedb "oldname", "newname"

Sometimes sp_renamedb may not work, because if someone is using the db it will not accept the
command. In such cases we can first bring the db to single-user mode using sp_dboption, then
rename the db, and then rerun sp_dboption to remove single-user mode.
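A sketch of the full sequence (the database names are placeholders):

use master
go
sp_dboption mydb, "single user", true
go
use mydb
go
checkpoint
go
use master
go
sp_renamedb mydb, mynewdb
go
sp_dboption mynewdb, "single user", false
go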
What is a default? Is there a column to which a default can't be bound?
Ans: A default is a value that will be used by a column if no value is supplied for that column
while inserting data. IDENTITY columns and timestamp columns can't have defaults bound to
them. See CREATE DEFAULT in Books Online.

What is the difference between static and dynamic configuration parameter in Sybase?

In Sybase ASE, when a dynamic configuration parameter is modified, the effect takes place
immediately. When a static parameter is modified, the server must be rebooted for the effect to
take place.

NOTE: What does the command sp_helpconfig "number of user connections", "100" return?

It returns the amount of memory that the Sybase ASE server would consume if the parameter
were set to that value.

59. What is candidate key, alternate key & composite key?

Candidate key: a column or combination of columns that can qualify as a unique key in the
database (for example, a primary key or unique constraint column). A table can have multiple
candidate keys, and each candidate key can qualify as the primary key.

Primary key: a column or combination of columns that uniquely identifies a record. Only one of
the candidate keys can be the primary key.

Composite key: a key that consists of two or more columns; for example
authors(au_lname, au_fname).

Alternate key: any of the candidate keys that is not chosen as the primary key.

60. Whats the different between a primary key and unique key?
Both primary key and unique key enforce uniqueness of the column on which they are

defined. But by default, a primary key creates a clustered index on the column, whereas

unique creates a nonclustered index by default. Another major difference is that a

primary key doesn't allow NULLs, but a unique key allows one NULL only.

61. How do you trace H/W signals?

With the trap command (a shell built-in).

62. What is a natural key?

A natural key is a key for a given table that uniquely identifies the row.

63. What are the salient features of 12.5?

i) Different logical page sizes (2K, 4K, 8K, 16K)

ii) A data migration utility

iii) The default database sybsystemdb is added

iv) Compression of dump files by Backup Server

v) Wider columns

vi) Larger numbers of rows per table

vii) In version 12 the master device was built with buildmaster; in 12.5 this is done with dataserver

64. What are different statistic commands you use in UNIX?

iostat, netstat, vmstat, mpstat, psrinfo


65. What do you mean by query optimization?

Query optimization is the process by which the server chooses the most efficient plan (index

selection, join order) for a query. As a DBA you support it by creating appropriate indexes on

tables and keeping statistics up to date, which improves performance.

66. What are locks?

A concurrency control mechanism that protects the integrity of data and transaction results in a
multi-user environment. Adaptive Server applies page or table locks to prevent two users from
attempting to change the same data at the same time, and to prevent processes that are selecting
data from reading data that is in the process of being changed.

67. What are levels of lock?

There are three types of locks:

* page locks

* table locks

* demand locks

Page Locks

There are three types of page locks:

* shared

* exclusive

* update
shared

These locks are requested and used by readers of information. More than one connection can
hold a shared lock on a data page.

This allows for multiple readers.

exclusive

The SQL Server uses exclusive locks when data is to be modified. Only one connection may
have an exclusive lock on a given data page. If a table is large enough and the data is spread
sufficiently, more than one connection may update different data pages of a given table
simultaneously.

update

An update lock is placed during a delete or an update while the SQL Server is hunting for the
pages to be altered. While an update lock is in place, there can be shared locks, thus allowing for
higher throughput.

The update lock(s) are promoted to exclusive locks once the SQL Server is ready to perform the
delete/update.

Table Locks

There are three types of table locks:

* intent

* shared

* exclusive
intent

Intent locks indicate the intention to acquire a shared or exclusive lock on a data page. Intent
locks are used to prevent other transactions from acquiring shared or exclusive locks on the
given page.

shared

This is similar to a page level shared lock but it affects the entire table. This lock is typically
applied during the creation of a non-clustered index.

exclusive

This is similar to a page level exclusive lock but it affects the entire table. If an update or delete
affects the entire table, an exclusive table lock is generated. Also, during the creation of a
clustered index an exclusive lock is generated.

Demand Locks

A demand lock prevents further shared locks from being set. The SQL Server sets a demand lock
to indicate that a transaction is next to lock a table or a page.

This avoids indefinite postponement if there was a flurry of readers when a writer wished to
make a change.

68. What is deadlock ?

A deadlock occurs when two or more user processes each have a lock on a separate page or table
and each wants to acquire a lock on the other process's page or table. The transaction with the
least accumulated CPU time is killed and all of its work is rolled back.

69. What is housekeeper?

The housekeeper is a task that becomes active when no other tasks are active. It writes dirty
pages to disk, reclaims lost space, flushes statistics to systabstats and checks license usage.

70. What are work tables? What is the limit?


Work tables are created automatically in tempdb by Adaptive Server for merge joins, sorts,

and other internal processes. There is a limit of 14 work tables per query; the system will

create at most 14 work tables for a query.

71. What is update statistics?

Updates information about the distribution of key values in specified indexes, for

specified columns, for all columns in an index, or for all columns in a table.

Usage: ASE keeps statistics about the distribution of the key values in each index, and uses these
statistics in its decisions about which indexes to use in query processing.

Syntax:

update statistics table_name [ [index_name] | [(column_list)] ]
    [ using step values ]
    [ with consumers = consumers ]

update index statistics table_name [index_name]
    [ using step values ]
    [ with consumers = consumers ]

The update statistics command helps the server make the best decisions about which indexes to
use when it processes a query, by providing information about the distribution of the key values
in the indexes. The update statistics commands create statistics if there are no statistics for a
particular column, or replace existing statistics if they already exist. The statistics are stored in
the system tables systabstats and sysstatistics.
72. What is sp_recompile?

Causes each stored procedure and trigger that uses the named table to be recompiled the next
time it runs.

Usage: The queries used by stored procedures and triggers are optimized only once, when they
are compiled. As you add indexes or make other changes to your database that affect its
statistics, your compiled stored procedures and triggers may lose efficiency. By recompiling the
stored procedures and triggers that act on a table, you can optimize the queries for maximum
efficiency.
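For example, after heavy data changes you might run (the table name is a placeholder):

update index statistics mytable
go
sp_recompile mytable
go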

73. What is a difference between a segment and a device?

A device is, well, a device: storage media that holds images of logical pages. A device will have
a row in the sysdevices table.

A fragment is a part of a device, indicating a range of virtual page numbers that have been
assigned to hold the images of a range of logical page numbers belonging to one particular
database. A fragment is represented by a row in sysusages.

A segment is a label that can be attached to fragments. Objects can be associated with a
particular segment (technically, each indid in sysindexes can be associated with a different
segment). When future space is needed for the object, it will only be allocated from the free
space on fragments that are labeled with that segment.

There can be up to 32 segments in a database, and each fragment can be associated with any, all,
or none of them (warnings are raised if there are no segments associated). Sysusages has a
column called segmap, which is a bitmap of the associated segments; this maps to the
syssegments table.

What is a segment?
A segment is a label that points to one or more database devices. Segment names are used
in create table and create index commands to place tables or indexes on specific database
devices. Using segments can improve Adaptive Server performance and give the System
Administrator or Database Owner increased control over the placement, size, and space usage of
database objects.

You create segments within a database to describe the database devices that are allocated to the
database. Each Adaptive Server database can contain up to 32 segments, including the system-
defined segments. Before assigning segment names, you must initialize the database devices
with disk init and then make them available to the database with create database or alter
database.


Sybase Transaction Log Management

Contents

* About transaction logs

* Turning off transaction logging

* What information is logged

* Sizing the transaction log

* Separating data and log segments

* Truncating the transaction log

* Managing large transactions

About Transaction Logs

Most SQL Server processing is logged in the transaction log table, syslogs. Each database,
including the system databases master, model, sybsystemprocs, and tempdb, has its own
transaction log. As modifications to a database are logged, the transaction log continues to grow
until it is truncated, either by a dump transaction command or automatically if the trunc log on
chkpt option is turned on as described below. This option is not recommended in most
production environments where transaction logs are needed for media failure recovery, because it
does not save the information contained in the log.

The transaction log on SQL Server is a write-ahead log. After a transaction is committed, the log
records for that transaction are guaranteed to have been written to disk. Changes to data pages
may have been made in data cache but may not yet be reflected on disk.
WARNING!

This guarantee cannot be made when UNIX files are used as SYBASE devices.

Transaction Logs and commit transaction

When you issue a commit transaction, the transaction log pages are immediately written to disk
to ensure recoverability of the transaction. The modified data pages in cache might not be written
to disk until a checkpoint is issued by a user or SQL Server or periodically as the data cache
buffer is needed by other SQL Server users. Note that pages modified in data cache can be
written to disk prior to the transaction committing, but not before the corresponding log records
have been written to disk. This happens if buffers in data cache containing dirty pages are needed
to load in a new page.

Transaction Logs and the checkpoint Process

If the trunc log on chkpt option is set for a database, SQL Server truncates the transaction log for
the database up to the page containing the oldest outstanding transaction when it issues a
checkpoint in that database. A transaction is considered outstanding if it has not yet been
committed or rolled back. A checkpoint command issued by a user does not cause truncation of
the transaction log, even when the trunc log on chkpt option is set. Only implicit checkpoints
performed automatically by SQL Server result in this truncation. These automatic checkpoints
are performed using the internal SQL Server process called the checkpoint process.

The checkpoint process wakes up about every 60 seconds and cycles through every database to
determine if it needs to perform a checkpoint. This determination is based on the recovery
interval configuration parameter and the number of rows added to the log since the last
checkpoint. Only those rows associated with committed transactions are considered in this
calculation.

If the trunc log on chkpt option is set, the checkpoint process attempts to truncate the log every
sixty seconds, regardless of the recovery interval or the number of log records. If nothing will be
gained from this truncation, it is not done.
Transaction Logs and the recovery interval

The recovery interval is a configuration parameter that defines the amount of time for the
recovery of a single database. If the activity in the database is such that recovery would take
longer than the recovery interval, the SQL Server checkpoint process issues a checkpoint.
Because the checkpoint process only examines a particular database every 60 seconds, enough
logged activity can occur during this interval that the actual recovery time required exceeds the
time specified in the recovery interval parameter.

Note that the transaction log of the tempdb database is automatically truncated during every
cycle of the checkpoint process, or about every 60 seconds. This occurs whether the trunc log on
chkpt option is set on tempdb or not.

Turning Off Transaction Logging

Transaction logging performed by SQL Server cannot be turned off, to ensure the recoverability
of all transactions performed on SQL Server. Any SQL statement or set of statements that
modifies data is a transaction and is logged. You can, however, limit the amount of logging
performed for some specific operations, such as bulk copying data into a database using bulk
copy (bcp) in the fast mode, performing a select/into query, or truncating the log. See the Tools
and Connectivity Troubleshooting Guide and the SQL Server Reference Manual for more
information on bcp. These minimally logged operations cause the transaction log to get out of
sync with the data in a database, which makes the transaction log useless for media recovery.

Once a non-logged operation has been performed, the transaction log cannot be dumped to a
device, but it can still be truncated. You must do a dump database to create a new point of
synchronization between the database and the transaction log to allow the log to be dumped to a
device.

What Information Is Logged


When a transaction is committed, SQL Server logs every piece of information relating to the
transaction in the transaction log to ensure its recoverability. The amount of data logged for a
single transaction depends on the number of indexes affected, the amount of data changed, and
the number of pages that must be allocated or deallocated. Certain other page management
information may also be logged. For example, when a single row is updated, the following types
of records may be placed in the transaction log:

* A data delete record, including all the data in the original row.

* A data insert record, including all the data in the modified row.

* One index delete record per index affected by the change.

* One index insert record per index affected by the change.

* One page allocation record per new data/index page required.

* One page deallocation record per data/index page freed.

Sizing the Transaction Log

There is no hard and fast rule dictating how big a transaction log should be. For new databases, a
log size of about 20 percent of the overall database size is a good starting point. The actual size
required depends on how the database is being used; for example:

* The rate of update, insert, and delete transactions

* The amount of data modified per transaction

* The value of the recovery interval configuration parameter

* Whether or not the transaction log is being saved for media recovery purposes

Because there are many factors involved in transaction logging, you usually cannot accurately
determine in advance how much log space a particular database requires. The best way to
estimate this size is to simulate the production environment as closely as possible in a test. This
includes running the applications with the same number of users as will be using the database in
production.

Separating Data and Log Segments

Always store transaction logs on a separate database device and segment from the actual data. If
the data and log are on the same segment, you cannot save transaction log dumps. Up-to-date
recovery after a media failure is therefore not possible. If the device is mirrored, however, you
may be able to recover from a hardware failure. Refer to the System Administration Guide for
more information.

Also, the data and log segments must be on separate segments so that you can determine the
amount of log space used. dbcc checktable on syslogs only reports the amount of log space used
and what percentage of the log is full if the log is on its own segment.

Finally, because the transaction log is appended each time the database is modified, it is accessed
frequently. You can increase performance for logged operations by placing the log and data
segments on different physical devices, such as different disks and controllers. This divides the
I/O requests for a database between two devices.

Truncating the Transaction Log

The transaction log must be truncated periodically to prevent it from filling up. You can do this
either by enabling the trunc log on chkpt option or by regularly executing the dump transaction
command.

WARNING!

Up-to-the-minute recoverability is not guaranteed on systems when the trunc log on chkpt option
is used. If you use this on production systems and a problem occurs, you will only be able to
recover up to your last database dump.
Because the trunc log on chkpt option causes the equivalent of the dump transaction with
truncate_only command to be executed, it truncates the log without saving it to a device. Use this
option only on databases for which transaction log dumps are not being saved to recover from a
media failure, usually only development systems.

Even if this option is enabled, you might have to execute explicit dump transaction commands to
prevent the log from filling during peak loads.

If you are in a production environment and using dump transaction to truncate the log, space the
commands so that no process ever receives an 1105 (out of log space) error.

When you execute a dump transaction, transactions completed prior to the oldest outstanding
transaction are truncated from the log, unless they are on the same log page as the last
outstanding transaction. All transactions since the earliest outstanding transaction are considered
active, even if they have completed, and are not truncated.

Figure 1 illustrates active and outstanding transactions:

Figure: Active transactions and outstanding transactions illustrated

This figure shows that all transactions after an outstanding transaction are considered active.
Note that the page numbers do not necessarily increase over time.

Because the dump transaction command only truncates the inactive portion of the log, you
should not allow stranded transactions to exist for a long time. For example, suppose a user
issues a begin transaction command and never commits the transaction. Nothing logged after the
begin transaction can be purged out of the log until one of the following occurs:
* The user issuing the transaction completes it.

* The user process issuing the command is forcibly stopped, and the transaction is rolled back.

* SQL Server is shut down and restarted.

Stranded transactions are usually due to application problems but can also occur as a result of
operating system or SQL Server errors. See, Managing Large Transactions, below, for more
information.

Identifying Stranded Transactions with syslogshold

In SQL Server release 11.0 and later, you can query the syslogshold system table to determine
the oldest active transaction in each database. syslogshold resides in the master database, and
each row in the table represents either:

* The oldest active transaction in a database, or

* The Replication Server truncation point for the database's log.

A database may have no rows in syslogshold, a row representing one of the above, or two rows
representing both of the above. For information about how Replication Server truncation points
affect the truncation of a database's transaction log, see your Replication Server documentation.

Querying syslogshold can help you when the transaction log becomes too full, even with
frequent log dumps. The dump transaction command truncates the log by removing all pages
from the beginning of the log up to the page that precedes the page containing an uncommitted
transaction record (the oldest active transaction). The longer this active transaction remains
uncommitted, the less space is available in the transaction log, since dump transaction cannot
truncate additional pages.
For information about how to query syslogshold to determine the oldest active transaction that is
holding up your transaction dumps, see Backing Up and Restoring User Databases in the System
Administration Guide.

Managing Large Transactions

Because of the amount of data SQL Server logs, it is important to manage large transactions
efficiently. Four common transaction types can result in extensive logging:

* Mass updates

* Deleting a table

* Insert based on a subquery

* Bulk copying in

The following sections contain explanations of how to use these transactions so that they do not
cause extensive logging.

Mass Updates

The following SQL statement updates every row in the large_tab table. All of these individual
updates are part of the same single transaction.

1> update large_tab set col1 = 0

2> go

On a large table, this query results in extensive logging, often filling up the transaction log before
completing. In this case, an 1105 error (transaction log full) results. The portion of the
transaction that was processed is rolled back, which can also require significant server resources.
Another disadvantage of unnecessarily large transactions is the number or type of locks held. An
exclusive table lock is normally acquired for a mass update, which prevents all other users from
modifying the table during the update. This may cause deadlocks.

You can sometimes avoid this situation by breaking up large transactions into several smaller
ones and executing a dump transaction between the different parts. For example, the single
update statement above could be broken into two or more pieces as follows:

1> update large_tab set col1 = 0

2> where col2 < x

3> go

1> dump transaction database_name

2> with truncate_only

3> go

1> update large_tab set col1 = 0

2> where col2 >= x

3> go

1> dump transaction database_name

2> with truncate_only

3> go

This example assumes that about half the rows in the table meet the condition col2 < x and the
remaining rows meet the condition col2 >= x.
If transaction logs are saved for media failure recovery, the log should be dumped to a device and
the with truncate_only option should not be used. Once you execute a dump transaction with
truncate_only, you must dump the database before you can dump the transaction log to a device.

Delete Table

The following SQL statement deletes the contents of the large_tab table within a single
transaction and logs the complete before-image of every row in the transaction log:

1> delete large_tab

2> go

If this transaction fails before completing, SQL Server can roll back the transaction and leave the
table as it was before the delete. Usually, however, you do not need to provide for the recovery of
a delete operation: if the operation fails halfway through, you can simply repeat it and the
result is the same. Therefore, the logging done by an unqualified delete statement may not
always be needed.

You can use the truncate table command to accomplish the same thing without the extensive
logging:

1> truncate table large_tab

2> go

This command also deletes the contents of the table, but it logs only space deallocation
operations, not the complete before- image of every row.

Insert Based on a Subquery


The SQL statement below reads every row in the large_tab table and inserts the value of columns
col1 and col2 into new_tab, all within a single transaction:

1> insert new_tab select col1, col2 from large_tab

2> go

Each insert operation is logged, and the records remain in the transaction log until the entire
statement has completed. Also, any locks required to process the inserts remain in place until the
transaction is committed or rolled back. This type of operation may fill the transaction log or
result in deadlock problems if other queries are attempting to access new_tab. Again, you can
often solve the problem by breaking up the statement into several statements that accomplish the
same logical task. For example:

1> insert new_tab

2> select col1, col2 from large_tab where col1 <= y

3> go

1> dump transaction database_name

2> with truncate_only

3> go

1> insert new_tab

2> select col1, col2 from large_tab where col1 > y

3> go

1> dump transaction database_name

2> with truncate_only

3> go

Note

This is just one example of several possible ways to break up a query.

This approach assumes that y represents a median value for col1. It also assumes that null values
are not allowed in col1. The inserts run significantly faster if a clustered index exists on
large_tab.col1, although it is not required.

If transaction logs are saved for media failure recovery, the log should be dumped to a device and
the with truncate_only option should not be used. Once you execute a dump transaction with
truncate_only, you must dump the database before you can dump the transaction log to a device.

Bulk Copy

You can break up large transactions when using bcp to bulk copy data into a database. If you use
bcp without specifying a batch size, the entire operation is performed as a single logical
transaction. Even if another user process does a dump transaction command, the log records
associated with the bulk copy operation remain in the log until the entire operation completes
and another dump transaction command is performed. This is one of the most common causes of
the 1105 error. You can avoid it by breaking up the bulk copy operation into batches. Use this
procedure to ensure recoverability:

1. Turn on the trunc log on chkpt option:


1> use master

2> go

1> sp_dboption database_name,

2> "trunc", true

3> go

1> use database_name

2> go

1> checkpoint

2> go

Note

trunc is an abbreviated version of the option trunc log on chkpt.

2. Specify the batch size on the bcp command line. This example copies rows into the
pubs2.authors table in batches of 100:

UNIX: bcp pubs2..authors in authors_file -b 100

3. Turn off the trunc log on chkpt option when the bcp operations are complete, and dump the
database.

In this example, a batch size of 100 rows is specified, resulting in one transaction per 100 rows
copied. You may also need to break the bcp input file into two or more separate files and execute
a dump transaction between the copying of each file to prevent the transaction log from filling
up.
If the bcp in operation is performed in the fast mode (with no indexes or triggers), the operation
is not logged. In other words, only the space allocations are logged, not the complete table. The
transaction log cannot be dumped to a device in this case until after a database dump is
performed (for recoverability).

If your log is too small to accommodate the amount of data being copied in, you may want to use
batching and have the trunc log on chkpt option set. This will truncate the log after
each checkpoint.

Sybase Tempdb space management and addressing tempdb log full issues

A default installation of Sybase ASE has a small tempdb located on the master device. Almost all
ASE implementations need a much larger temporary database to handle sorts and worktables and
therefore DBAs need to increase tempdb. This document gives some recommendations how this
can be done and describes various techniques to guarantee maximum availability of tempdb.

Contents

* 1 About Segments

* 2 Prevention of a full logsegment

* 3 Default or system segments are full

* 4 Prevention of a full segment for data

* 5 Separation of data and log segments

* 6 Using the dsync option

* 7 Moving tempdb off the master device


* 8 Summary of the recommendations

About Segments

Tempdb is basically just another database within the server and has three segments (see "What is
a segment?" above): system for system tables like sysobjects and syscolumns, default to store
objects such as tables, and logsegment for the transaction log (the syslogs table). With this type
of segmentation, no matter the size of the database, there is no defined space limit for the
transaction log; the only limitation is the available size within the database. The following script
illustrates that this can lead to nasty problems.

create table #a (a char(100) not null)

go

declare @a int

select @a = 1

while @a > 0

begin

insert into #a values ("get full")

end

go

Running the script populates table #a and the transaction log at the same time, until tempdb is
full. Then the log gets automatically truncated by ASE, allowing more rows to be inserted in
the table until tempdb is full again. This cycle repeats itself a number of times until tempdb is
filled up to the point that even the transaction log cannot be truncated anymore. At that point the
ASE errorlog will show messages like "1 task(s) are sleeping waiting for space to become
available in the log segment for database tempdb". When you log on to ASE to resolve this
problem and you run sp_who, you will get "Failed to allocate disk space for a work table in
database tempdb. You may be able to free up space by using the DUMP TRANsaction
command, or you may want to extend the size of the database by using the ALTER DATABASE
command."

Your first task is to kill off the process that causes the problem, but how can you know which
process to kill if you can't even run sp_who? This problem can be solved with the lct_admin
function. In the form lct_admin("abort", 0, <dbid>) you can kill sessions that are waiting on a log
suspend. So you do:

select lct_admin("abort", 0, 2) -- 2 is the dbid of tempdb

When you execute the lct_admin function the session is killed, but tempdb is still full. In fact it's
so full that table #a cannot be dropped, because this action must also be logged in the
transaction log of tempdb. Besides a reboot of the server, you would have no other option than to
increase tempdb (alter database) with just a bit more space for the logsegment:

alter database tempdb log on <device> = <size>

This extends tempdb and makes it possible to drop table #a and to truncate the transaction log. In
a real-life situation this scenario could cause significant problems for users.

Prevention of a full logsegment

One of the database options that can be set with the sp_dboption stored procedure can be used to
prevent this. When you run:

sp_dboption tempdb, "abort tran on log full", true

(for pre-12.5.1 servers: followed by a checkpoint in tempdb), the transaction that fills up the
transaction log in tempdb is automatically aborted by the server.


Default or system segments are full

The default or system segments in tempdb, where the actual data is stored, can also get full, just
like in any ordinary database. Your query is cancelled with a Msg 1105: "Can't allocate space for
object '#a_____00000180017895422' in database 'tempdb' because the 'default' segment is full/has
no free extents. If you ran out of space in syslogs, dump the transaction log. Otherwise, use
ALTER DATABASE or sp_extendsegment to increase size of the segment." This message can be
caused by a query that creates a large table in tempdb, or by an internal worktable created by ASE
for sorts, etc. Potentially, this problem is much worse than a full transaction log, since the
transaction is cancelled. A full log segment leads to sleeping processes until the problem is
resolved; a full data segment leads to aborted transactions.

Prevention of a full segment for data

The Resource Governor in ASE allows you to deal with these circumstances. You can specify
just how much space a session is allowed to consume within tempdb. When the space usage
exceeds the specified limit, the session is given a warning or is killed. Before using this feature
you must configure ASE (with sp_configure) to use the Resource Governor:

sp_configure "allow resource limits", 1

After a reboot of the server (for 12.5.1 too) you can add limits with sp_add_resource_limit:

sp_add_resource_limit petersap, null, "at all times", tempdb_space, 200

This limit means that the user petersap is allowed to use 200 pages within tempdb. When the
limit is exceeded, the session receives an error message (Msg 11056) and the query is aborted.
Different options for sp_add_resource_limit make it possible to kill the session when the limit is
exceeded. Just how many pages a user should be allowed to use in tempdb depends on your
environment. Things like the size of tempdb, the number of concurrent users, and the type of
queries should be taken into account when setting the resource limit. When a resource limit for
tempdb is crossed, it is logged in the Sybase errorlog. This makes it possible to trace how often
a limit is exceeded and by whom; with this information the resource limit can be tuned. When you
use multiple temporary databases, the limit is enforced on all of them.

Separation of data and log segments

For performance reasons it makes sense to separate the system+default segments and the
logsegment from each other. Not all sites follow this policy; it's a tradeoff between the flexibility
of having data and log combined and some increased performance. Since tempdb is a heavily
used database, it's not a bad idea to invest some time in an investigation of the space
requirements. The following example illustrates how tempdb could be configured with separate
devices for the logsegment and the data. The example is based on an initial setting of tempdb on
the master device. First we increase tempdb for the system and data segments:

alter database tempdb on <data device> = <size>

Then we extend tempdb for the transaction log:

alter database tempdb log on <log device> = <size>

When you have done this and run sp_helpdb tempdb, you will see that data and log are still
on the same segment. Submit the following to resolve this:

sp_logdevice tempdb, <log device>

Please note that tempdb should not be increased on the master device.

Using the dsync option

The dsync option for devices allows you to enable/disable I/O buffering to file systems. The
option is not available for raw partitions and NT files. To get the maximum possible performance
for tempdb, use dedicated device files, created with the Sybase disk init command. The files
should be placed on a file system, not on raw partitions. Set the dsync option to false as in the
following example (disk init):

disk init name = "tempdb_data",

physname = "/var/sybase/tempdb_data.dat",

size = "500M",

dsync = false

Moving tempdb off the master device

When you have increased tempdb on separate devices, you can configure tempdb so that the
master device is unused. This increases the performance of tempdb even further. There are
various techniques for this, all with their pros and cons, but I recommend the following: modify
sysusages so that segmap is set to 0 for the master device. In other words, change the
segments of tempdb so that the master device is unused. This can be done with the following
statements:

sp_configure "allow updates to system tables", 1

go

update master..sysusages

set segmap = 0

where dbid = 2

and lstart = 0

go

sp_configure "allow updates to system tables", 0

go

shutdown

go

Then restart ASE.

When you use this configuration you should know the recovery procedure, just in case one of the
devices of tempdb gets corrupted or lost. Start your ASE in single-user mode by adding the -m
switch to the dataserver options. Then submit the following statements:

update master..sysusages

set segmap = 7

where dbid = 2

and lstart = 0

go

delete master..sysusages

where dbid = 2

and lstart > 0

go

shutdown

go

Remove the -m switch from the dataserver options and restart ASE. Your tempdb is now
available with the default allocation on the master device.

Summary of the recommendations

* Increase tempdb from its initial size to a workable value

* Set the option abort tran on log full for tempdb to on

* Create resource limits

* Place data and log segments on separate devices

* Place tempdb on filesystem with dsync set to false

* Move tempdb off the master device by modifying the segmap attribute

74. Do we have to create the sp_thresholdaction procedure on every segment, on every database,
or somewhere else?

You don't *have* to create threshold action procedures for any segment, but you *can* define
thresholds on any segment. The log segment has a default "last chance" threshold set up that
will call a procedure named sp_thresholdaction. It is a good idea to define sp_thresholdaction,
but you don't have to; if you don't, you will just get a "proc not found" error when the log
fills up, and you will have to take care of it manually.

Thresholds are created only on segments, not on devices or databases. You can create the
procedure in sybsystemprocs with a name starting with sp_ so that multiple databases share
the same procedure, but often each database has its own requirements, so the procedures are
created locally instead.
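For reference, a minimal sp_thresholdaction sketch; the four parameters are the ones ASE passes
automatically when the threshold fires, and the dump path is hypothetical:

create procedure sp_thresholdaction
    @dbname      varchar(30),
    @segmentname varchar(30),
    @space_left  int,
    @status      int
as
begin
    -- free log space in the database that fired the last-chance threshold
    dump transaction @dbname to "/sybase/dumps/tran.dmp"
    print "last-chance threshold fired in %1! on segment %2!",
          @dbname, @segmentname
end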

Determining Free Log Space in Sybase ASE


Determining Unused Log Space

Use dbcc checktable(syslogs) for an accurate check of free space in Sybase Adaptive Server
Enterprise.

(A bit old, though.)

When you need to check the free space in a database's transaction log, you would typically use
the stored procedure sp_helpdb. While sp_helpdb is useful for a general estimate of free space,
for a precise figure use one of the following methods:

* dbcc checktable (syslogs)

* Determine the number of data pages in the transaction log via isql script, for example:

select data_pgs (8, doampg)

from sysindexes where id=8

go

Each method has advantages.

Sybase recommends sp_helpdb for most situations because it reports quickly. sp_helpdb uses the
unreserved page count in sysusages. However, the unreserved page count is updated only
intermittently and therefore may not accurately reflect the actual state of the database. Thus,
sp_helpdb may report free space, yet an insert may still run out of space, resulting in error
message 1105, which reads in part:

Can't allocate space for object ... because the log segment is full

If this error occurs, follow the instructions in Runtime 1105 Errors: State 3 in the Error Message
Writeups chapter of the Adaptive Server Enterprise Troubleshooting and Error Messages Guide.

The dbcc checktable (syslogs) command also checks for possible corruption as well as the size of
the log. However, it can take a long time to run, depending on the size of the log. For more
information about dbcc checktable, see the chapter, Checking Database Consistency in the
Adaptive Server Enterprise System Administration Guide.

The isql script is more accurate than sp_helpdb. It is described in the Error 1105 section in Error
Message Writeups chapter of the Adaptive Server Enterprise Troubleshooting and Error
Messages Guide.
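As a further quick check (assuming ASE 12.5.0.3 or later, where this lct_admin option exists;
the database name is hypothetical), the number of free pages on the log segment can be queried
directly:

select lct_admin("logsegment_freepages", db_id("mydb"))
go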

75. When to run a reorg command?

reorg is useful when:

* A large number of forwarded rows causes extra I/O during read operations.

* Inserts and serializable reads are slow because they encounter pages with noncontiguous free
space that needs to be reclaimed.

* Large I/O operations are slow because of low cluster ratios for data and index pages.

* sp_chgattribute was used to change a space management setting (reservepagegap, fillfactor, or
exp_row_size) and the change is to be applied to all existing rows and pages in a table, not just
to future updates. See the sketch below.
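The four reorg variants map onto these situations; as a sketch (table name hypothetical, and
note that reorg applies to data-only-locked tables):

reorg forwarded_rows mytable   -- clean up forwarded rows
go
reorg reclaim_space mytable    -- reclaim noncontiguous free space
go
reorg compact mytable          -- forwarded rows plus space reclamation in one pass
go
reorg rebuild mytable          -- full rebuild; also applies changed space attributes
go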
76. What are the most important DBA tasks?

In my opinion, these are (in order of importance): (i) ensure a proper database / log dump
schedule for all databases (including master); (ii) run dbcc checkstorage on all databases
regularly (at least weekly), and follow up any corruption problems found; (iii) run update
[index] statistics at least weekly on all user tables; (iv) monitor the server errorlog for
messages indicating problems (daily). Of course, a DBA has many other things to do as well, such
as supporting users and developers, monitoring performance, etc.

77. What is the bit datatype and what information can be stored inside a bit column?

The bit datatype is used to store Boolean information (1 or 0, true or false). Up to SQL Server
6.5 a bit column could hold only a 1 or 0, with no support for NULL; from SQL Server 7.0 onwards
the bit datatype can also represent a third state, NULL.

78. What are different types of triggers?

A trigger is a piece of code that fires automatically when an event occurs on a table, such as
an insert, delete, or update. Accordingly, there are 3 types of triggers available with Sybase:
insert, update, and delete triggers.

How triggers work in Sybase

Triggers: Enforcing Referential Integrity

How triggers work

Triggers are automatic. They work no matter what caused the data modification: a clerk's data
entry or an application action. A trigger is specific to one or more of the data modification
operations (update, insert, and delete), and is executed once for each SQL statement.

For example, to prevent users from removing any publishing companies from the publishers
table, you could use this trigger:
create trigger del_pub
on publishers
for delete
as
begin
    rollback transaction
    print "You cannot delete any publishers!"
end

The next time someone tries to remove a row from the publishers table, the del_pub trigger
cancels the deletion, rolls back the transaction, and prints a message.

A trigger fires only after the data modification statement has completed and Adaptive Server
has checked for any datatype, rule, or integrity constraint violation. The trigger and the statement
that fires it are treated as a single transaction that can be rolled back from within the trigger. If
Adaptive Server detects a severe error, the entire transaction is rolled back.

Triggers are most useful in these situations:

* Triggers can cascade changes through related tables in the database. For example, a
delete trigger on the title_id column of the titles table can delete matching rows in other tables,
using the title_id column as a unique key to locating rows in titleauthor and roysched.

* Triggers can disallow, or roll back, changes that would violate referential integrity,
canceling the attempted data modification transaction. Such a trigger might go into effect when
you try to insert a foreign key that does not match its primary key. For example, you could create
an insert trigger on titleauthor that rolled back an insert if the new titleauthor.title_id value did
not have a matching value in titles.title_id.

* Triggers can enforce restrictions that are much more complex than those that are defined
with rules. Unlike rules, triggers can reference columns or database objects. For example, a
trigger can roll back updates that attempt to increase a book's price by more than 1 percent of
the advance.

* Triggers can perform simple "what if" analyses. For example, a trigger can compare the
state of a table before and after a data modification and take action based on that comparison.

Triggers in Sybase

A trigger is a special type of stored procedure that is executed automatically when a DML
operation takes place on a table.

* Triggers are used to enforce referential integrity.

* Triggers are used to cascade changes to related tables.

* Triggers can be used to apply restrictions more complex than those enforced using rules.

* Trigger can perform analysis before and after changes to the table.

Triggers cannot have the following:

1. create and drop commands.

2. alter table, alter database, truncate table.

3. load database and load transaction.

4. Grant and revoke statements.

5. update statistics

6. reconfigure

7. disk init, disk mirror, disk refit, disk reinit, disk remirror, disk unmirror

8. select into

How to Create Trigger in Sybase


create trigger emp_trigger
on emp
for insert, update, delete
as
    /* trigger body: SQL statements */

Trigger Example

create trigger emp_trigger

on emp

for delete

as

delete payment

from payment, deleted

where payment.empid = deleted.empid

79. How many triggers will be fired if more than one row is inserted?

The trigger fires once per statement, not once per row: if a single insert statement inserts ten
rows, the insert trigger fires once, and the inserted table inside it contains all ten rows. The
trigger body must therefore be written to handle multiple rows.

80. What is the advantage of using triggers?

They maintain referential integrity automatically, no matter which application performs the data
modification.

81. How do you optimize a stored procedure?

By creating appropriate indexes on the tables it uses, and by writing its queries so that they
can actually use those indexes (then verifying with showplan that the optimizer picks the
expected index).
82. How do you optimize a select statement?

By using SARGs in the where clause and checking the query plan with set showplan on. If the
query does not use the proper index, force the correct index to make the query run faster.
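In ASE an index can be forced directly in the from clause; a sketch with hypothetical table and
index names:

select *
from orders (index orders_date_idx)   -- force the named index
where order_date >= "20110101"
go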

83. How do you force a transaction to fail?

By killing a process you can force a transaction to fail.
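For example (the spid is hypothetical; only someone with sa_role can run kill):

sp_who          -- find the spid owning the transaction
go
kill 42         -- terminate it; its open transaction is rolled back
go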

84. What are constraints? Explain different types of constraints?

Constraints enable the RDBMS to enforce the integrity of the database automatically, without
needing you to create triggers, rules, or defaults.

Types of constraints: NOT NULL, CHECK, UNIQUE, PRIMARY KEY, FOREIGN KEY

85. What are the steps you will take to improve performance of a poor performing query?

This is a very open-ended question and there could be a lot of reasons behind the poor
performance of a query. But some general issues that you could talk about would be: no indexes,
table scans, missing or out-of-date statistics, blocking, excess recompilations of stored
procedures, procedures and triggers without SET NOCOUNT ON, poorly written queries with
unnecessarily complicated joins, too much normalization, and excess usage of cursors and
temporary tables.

Some of the tools / ways that help you troubleshoot performance problems are:
SET SHOWPLAN ON

86. What would you do when the ASE server's performance is bad?

"Bad performance" is not a very meaningful term, so you'll need to get a more objective
diagnosis first. Find out (i) what the complaint is based on (clearly increased response times,
or just a feeling that it's slower?), (ii) for which applications / queries / users this seems to
be happening, and (iii) whether it happens continuously or just incidentally. Without identifying
a specific, reproducible problem, any action is no better than speculation.

87. What do you do when a segment gets full?

Wrong: a segment can never get full (even though some error messages state something to that
extent). A segment is a label for one or more database device fragments; the fragments to which
that label has been mapped can fill up, but the segment itself cannot. (Well, OK, this is a bit
of a trick question: when those device fragments fill up, you either add more space or clean up
old / redundant data.)

88. Is it a good idea to use data rows locking for all tables by default?

Not by default. Only if you're having concurrency (locking) problems on a table, and you're not
locking many rows of the table in a single transaction, should you consider datarows locking for
that table. In all other cases, use either datapages or allpages locking.

(A case can be made for datapages locking as the default lock scheme for all tables, because
switching from datapages to datarows locking is fast and easy, whereas converting away from
allpages locking means the entire table has to be rebuilt, which may take long for large tables.
Also, datapages locking has other advantages over allpages, such as not locking index pages,
update statistics running at level 0, and the availability of the reorg command.)

89. Is there any advantage in using the 64-bit version of ASE instead of the 32-bit version?

The only difference is that the 64-bit version of ASE can handle a larger data cache than the
32-bit version, so you'd save on physical I/O. Therefore, this may be an advantage if the amount
of data cache is currently a bottleneck. There's no point in using 64-bit ASE with the same
amount of total memory as for the 32-bit version, because 64-bit ASE comes with additional
overhead in memory usage, so the net amount of data cache would actually be less for 64-bit than
for 32-bit in this case.

90. What is the difference between managing permissions through users and groups or through
user-defined roles?

The main difference is that user-defined roles (introduced in ASE 11.5) are server-wide and are
granted to logins. Users and groups (the classic method that has always been there since the
first version of Sybase) are limited to a single database. Permissions can be granted / revoked
to both user-defined roles and users / groups. Whichever method you choose, don't mix them, as
the precedence rules are complicated.

91. How do you BCP only a certain set of rows out of a large table?

If you're on ASE 11.5 or later, create a view for those rows and BCP out from the view. In
earlier ASE versions, you'll have to select those rows into a separate table first and BCP out
from that table. In both cases, the speed of copying the data depends on whether there is a
suitable index for retrieving the rows.
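A sketch of the view-based approach (all names hypothetical); the bcp command itself is run from
the OS shell:

create view v_2010_orders as
select * from orders
where order_date >= "20100101"
go

-- from the OS shell:
-- bcp mydb..v_2010_orders out orders_2010.dat -Usa -SSYBSERVER -c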

92. What are the main advantages and disadvantages of using identity columns?

The main advantage of an identity column is that it can generate unique, sequential numbers very
efficiently, requiring only a minimal amount of I/O. The disadvantages are that the generated
values themselves are not transactional, and that the identity values may jump enormously when
the server is shut down the rough way (resulting in "identity gaps"). You should therefore only
use identity columns in applications if you've addressed these issues.
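Since ASE 12.5 the potential gap size can also be capped per table with the identity_gap
attribute; a sketch (table name hypothetical):

sp_chgattribute "mytable", "identity_gap", 100
go

This limits how many identity values are pre-allocated in memory, trading a little insert
performance for smaller gaps after a crash.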

93. Is there any disadvantage of splitting up your application data into a number of
different databases?

When there are relations between tables / objects across the different databases, then there is
a disadvantage indeed: if you restore a dump of one of the databases, those relations may no
longer be consistent. This means that you should always back up a consistent set of databases
together, since the database is the unit of backup / restore. Therefore, when making this kind
of design decision, backup / restore issues should be considered (and the DBA should be
consulted).

94. How do you tell the date and time the server was started?

select "Server Start Time" = crdate
from master..sysdatabases
where name = "tempdb"

or select * from sysengines (the starttime column).

95. How do you move tempdb off of the master device?

This is Sybase TS's method of removing most activity from the master device:

Alter tempdb onto another device (tempdb_data is an example device name):

alter database tempdb on tempdb_data = "100M"
go

Then drop the segments from the master device:

1> sp_dropsegment "default", tempdb, master
2> go
1> sp_dropsegment logsegment, tempdb, master
2> go
1> sp_dropsegment system, tempdb, master
2> go

96. We have lost the sa password, what can we do?

Most people use the sa account all of the time, which is fine if there is only ever one DBA
administering the system. If you have more than one person accessing the server using the sa
account, consider using sa_role-enabled accounts and disabling the sa account. Funnily enough,
this is obviously what Sybase thinks, because it is one of the questions in the certification
exams.

If you see that someone is logged in using the sa account, or is using an account with sa_role
enabled, then you can do the following:

sp_configure "allow updates to system tables", 1
go
update syslogins set password = null where name = "sa"
go
sp_password null, newPassword
go

97. What are the 4 isolation levels, and which is the default one?

Level 0: read uncommitted (dirty reads)

Level 1: read committed (the default)

Level 2: repeatable read

Level 3: serializable
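The level can be changed per session, for example:

set transaction isolation level 3   -- serializable for this session
go
select @@isolation                  -- verify the current level
go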

98. Describe differences between chained mode and unchained mode?

Chained mode is ANSI-89 compliant, whereas unchained mode is not.

In chained mode the server executes an implicit begin tran, whereas in unchained mode an
explicit begin tran is required.
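A short sketch of chained mode in action (table name hypothetical):

set chained on
go
-- this update implicitly opens a transaction...
update mytable set qty = qty + 1 where id = 1
-- ...and nothing is made permanent until an explicit commit
commit
go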
99. dump transaction with standby_access is used to?

Provide a transaction log dump containing no active transactions, suitable for loading onto a
warm standby server.
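A sketch of the round trip (database name and dump path hypothetical):

dump tran mydb to "/dumps/mydb_tran.dmp" with standby_access
go
-- on the standby server:
load tran mydb from "/dumps/mydb_tran.dmp"
go
online database mydb for standby_access
go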

100. Which optimizer statistics are maintained dynamically?

Page counts and row counts.

Morgan Stanley, Telephonic round Sybase Interview Questions

Guys, I have collected some Sybase interview questions from the folks who attended Morgan
Stanley, Mumbai interview recently. Please try to post correct answers so that everyone benefits
from this.

1. Explain performance tuning issues you recently worked on.

2. How do you check the query plan, and how do you get the query plan without executing the
query?

3. Difference between clustered and non-clustered indexes, and when to create them. How many
clustered / non-clustered indexes can be created on a specific table?

4. Types of locks in Sybase. Are shared-on-shared, shared-on-exclusive, and
exclusive-on-exclusive locks possible?

5. How do you identify which process created a deadlock situation?

6. What is the default isolation level in Sybase, and what is the purpose of using isolation
levels?

7. Which performs better, a join or a subquery, from a memory perspective?

8. Discussion of a query having a not in clause, w.r.t. performance tuning.

9. Which performs better, not in or not exists?

10. How many parameters can a stored procedure return?

11. Can you BCP out a table having 10 million rows or more?

12. Difference between truncate and delete?

13. What is the purpose of with check option in views?

14. If the table doesn't have an index, will Sybase allow creating an updatable view on it?

15. Multi-table views: how does update work?

2. set showplan on together with set noexec on (set showplan on must be run first, otherwise the
noexec setting prevents it from taking effect).

5. Enable sp_configure "print deadlock information", 1 to write deadlock details to the Sybase
errorlog, but note that this can degrade performance.

6. The default isolation level is 1. Isolation levels specify the kinds of interactions that are
not permitted while concurrent transactions are executing; that is, whether transactions are
isolated from each other, or whether they can read or update information in use by another
transaction. Sybase supports 4 isolation levels: level 0 (read uncommitted), level 1 (read
committed), level 2 (repeatable read) and level 3 (serializable read).

11. The OS file size limit can be exceeded if 10 million or more rows are bcp'd out. To avoid
that we can use the -F (first row) and -L (last row) options of the bcp utility to split the
copy across multiple files.

13. with check option restricts the rows that can be inserted or updated through the view to
those that satisfy the view's where clause:

create view vw_ca_authors as
select au_id, au_lname, au_fname, phone, state
from authors
where state = "CA"
with check option

You can only insert or update rows with state = "CA", so this insert fails:

insert vw_ca_authors values ("111-222-3333", "Smith", "John", "453-2343", "NY")

15. Only one of the underlying tables can be updated at a time, and the view must not have been
created with check option.

7. If memory is ample, then joins are preferable. A join generally performs better than a
subquery, as subqueries can involve the creation of intermediate worktables and more I/O.
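For question 9, a typical rewrite (using the pubs2 tables) replaces not in with a correlated
not exists, which can usually probe an index on the inner table instead of materializing the
whole subquery result:

-- NOT IN form
select au_id from authors
where au_id not in (select au_id from titleauthor)

-- NOT EXISTS form, usually cheaper
select au_id from authors a
where not exists (select 1 from titleauthor t where t.au_id = a.au_id)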

RBS Sybase Interview questions

1) Using the below-mentioned query, you can find duplicate values:

select column_name from table_name
group by column_name
having count(*) > 1

2) We can use sp_lock and sp_familylock to see the locks currently held in the database.

3) select * from sysprocesses (here we can see the CPU utilization, engine number, and blocked
processes), or sp_who.

4) If you want to improve the performance of this query, we have to create an index for it.

5) We can analyze the query by looking at its query plan (sp_showplan for a running spid).

6) Need to check.

7) truncate: removes all the data from the table but keeps the table definition.

drop: removes the data and the table definition as well.

Hope this helps; please correct me if I am wrong.

SYBASE INTERVIEW QUESTIONS Accenture

1. What databases are created in Sybase by default when installed?

master, tempdb, model

model: the template database whose attributes and objects are copied into every newly created
database. Need an example??

2. What types of temp tables are created?

#abc, tempdb..abc

A # table's lifetime is the stored procedure in which it is created, or the session that created
it; when the session is closed, all its # tables are dropped.

3. When will tempdb..xxx tables be dropped?

When the Sybase server is bounced.

4. What happens exactly when the sybase server is bounced? How are the tempdb.. tables
dropped?

The tempdb database is recreated from the model database at startup, so all tempdb..xxx tables
disappear.

5. Have you used user defined datatypes? What are they?

They let you define your own named type on top of a system type, e.g.:
sp_addtype tid, "char(6)", "not null"

6. What are the advantages of sp?

Fast. Why?

a. Less network traffic.

b. Faster execution, because it is already compiled.

c. The query plan can be kept in the procedure cache.

7. When will the query plan of a sp be created?

The procedure is parsed and its query tree stored at create time; the query plan itself is
created when the procedure is executed for the first time (and then cached).

8. What exactly happens when the sp is created?

The SQL text is parsed and normalized into a query tree, which is stored in sysprocedures; the
source text is stored in syscomments.

9. Where is the sp stored when created?

syscomments, which also stores the source text of views, rules, defaults, and triggers; the
normalized query tree goes into sysprocedures.

Performance Tuning:

10. How will you start performance tuning?

11. How sybase will decide which index to use?

based on the statistics stored in sysstatistics.

12. What are deferred update and direct update?

In an update statement, when the table being updated is joined in a way that could cause the
updated rows to be visited again (and so join endlessly), Sybase defers the update: the changes
are first recorded (in the transaction log / a worktable) while all qualifying rows are scanned,
and then applied to the table in a second pass.

A direct update is applied in place, in real time, in a single pass.

13. How do you get the query plan? How do you get the query plan if you don't want to execute
the query?
SET SHOWPLAN ON

SET NOEXEC ON

SET FMTONLY ON

14. How is error handling done in a stored procedure?

The @@error global variable is non-zero when the immediately preceding SQL statement failed;
test it after each statement that matters.
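A minimal sketch of the usual pattern inside a stored procedure (table and error number
hypothetical; user-defined error numbers must be 20000 or higher):

declare @err int
update emp set salary = salary * 1.1 where empid = 42
select @err = @@error
if @err != 0
begin
    raiserror 20001 "salary update failed"
    return @err
end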

15. How will you pass the error message from a stored procedure to the application program?

Typically with raiserror (optionally using messages registered via sp_addmessage); the
application picks the error up through the error handler of its database interface.

16. What are different modes of transaction?

Chained mode and unchained mode.

The default is unchained; use set chained on to switch the transaction mode.

http://manuals.sybase.com/onlinebooks/group-as/asg1250e/sqlug/@Generic__BookTextView/53713;pt=52735/*
17. How does Sybase internally manage a transaction?

@@trancount, transaction log

18. In a nested transaction, if you issue a rollback at the end, all transactions are rolled
back. How does Sybase do this?

Transactions do not really nest: an inner begin tran only increments @@trancount, and an inner
commit only decrements it. A rollback (without a savepoint name) always rolls back to the
outermost begin tran, using the transaction log, and resets @@trancount to 0.
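A sketch of the bookkeeping:

begin tran
    -- @@trancount is now 1
    begin tran
        -- @@trancount is now 2; no new transaction actually starts
    commit tran
    -- the inner commit only decrements @@trancount back to 1
rollback tran
-- rollback ignores the nesting: everything since the outermost
-- begin tran is undone, and @@trancount drops to 0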

19. What are different locking schemes in Sybase?

Allpages locking, which locks datapages and index pages

Datapages locking, which locks only the data pages

Datarows locking, which locks only the data rows

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc20021_1251/html/locking/X25549.htm

20. How do you define what lock to be applied when defining a table?

create table table_name (column_name_list)
[lock {datarows | datapages | allpages}]

e.g.: create table abc (c1 int, c2 int) lock datarows

21. What is the difference between Row level, Page level, Table level locks? Which is preferred?

Row-level (datarows) locking gives the highest concurrency but the most lock overhead;
page-level locks affect every row on the page; table-level locks have the least overhead and the
least concurrency. The preferred scheme depends on the workload.

22. What is the default locking scheme is Sybase? Why Sybase decide to use this?

Allpages locking is the default scheme; it has the lowest locking overhead and was the only
scheme available before the data-only-locked schemes were introduced in ASE 11.9.

23. How update lock works?

An update lock is taken on the pages/rows scanned by an update or delete; it allows shared locks
from other tasks but blocks other update or exclusive locks, and is promoted to an exclusive
lock when the row is actually changed.

24. Which lock should be used? Which is faster (or something like that he asked)?

For wide range scans or mass updates, page-level locking is cheaper; for highly concurrent point
operations, row-level locking is preferable.

24. What are different types of joins?

Simple join, self join, outer join

25. If monitoring tool is not installed how will you indentify the slow sql in a application?

sysprocesses (the cpu, physical_io, and memusage columns) can help; if the MDA tables are
installed, monProcessSQLText and monSysSQLText show the SQL actually being executed.

26. How do you quickly provide a solution for a performance issue?

An abstract query plan (AQP) can force a known-good plan onto a query without changing the
application.
27. How will you apply AQP to a query within a stored procedure?

Attach a plan clause to the individual query (select ... plan "(...)"), or capture plans with
set plan dump and apply them with set plan load.

28. What are the tools available in Sybase for performance tuning?

set showplan, set statistics io / time, forceplan, forced indexes, index covering, sp_sysmon,
optdiag.

29. What are indexs and types? Diff between clustered and non-clustered index?

http://www.sybaseteam.com/showthread.php?tid=405

30. What are disadvantages of clustered index?

The data must be kept in sorted order, so a table that is inserted into / amended / deleted from
often pays extra maintenance cost, and creating the index needs free space of roughly 120% of
the table size.

31. In Stored Procedure, what is the use of with recompile option?

A new query plan is created every time the sp is executed. Used when the data in the tables used
by the sp changes drastically / dynamically between executions.

32. What is sp_recompile? When will you use this?

Causes each stored procedure and trigger that uses the named table to be recompiled the next
time it runs:

sp_recompile objname

Use it after adding an index or updating statistics, so that cached plans based on stale
information get rebuilt.
33. What are the advantages of views?

abstraction; not all data of the same table can be shown to the user.

He was asking one basic advantage you missed what is that?

http://sqlserverpedia.com/wiki/Views_-_Advantages_and_Disadvantages

34. What is with check option on views?

It guarantees that inserts and updates performed through the view must satisfy the view's where
clause, so a row cannot be changed in a way that makes it invisible through the view.

35. When a new column is added to the table and there is a view with that table say select *
from table,

when you execute the view will it include the new column?

No, because the select * is expanded to the individual column names when the view is created, so
the view will not know about the new column.

36. When a user manually updates a column, say flag, in the table (there may be many other
columns), it should be validated. How will you do this?

With an update trigger:

create trigger flag_trig
on emp
for update
as
begin
    if exists (select 1
               from deleted d, inserted i
               where d.empid = i.empid
                 and d.flag != i.flag)
    begin
        /* some validation action */
    end
end

37. How does a update trigger work?

Through the "magic tables" deleted and inserted, which hold the before and after images of the
affected rows.

38. What are different BCP types? What are the options available?

Fast bcp (drop or disable triggers and indexes on the table, enable select into/bulkcopy, then
bcp; rows are not fully logged) and slow bcp (indexes or triggers present; every row is logged).

39. What is the batch option in BCP? When the -b option is not given and you bcp in 4 million
records, what happens?

The whole load becomes one transaction: the transaction log can blow up, and the long-open
transaction also prevents log truncation, which creates its own problems.
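A sketch of a batched, character-mode load (server, login, and file names hypothetical):

bcp mydb..emp in emp.dat -Usa -SSYBSERVER -c -b 10000

Each 10000-row batch is committed as its own transaction, so the log can be truncated as the
load progresses.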

40. What happens exactly when a BCP with batch option is done?

Each batch of -b rows is committed as a separate transaction, so the log can be truncated
between batches, and a failure only rolls back the current batch.

41. What is the use of identity column? Can we give our own value? Do you know of identity
gaps?
Identity columns give efficient sequential numbering generated by Sybase. Yes, we can supply our
own value (set identity_insert table_name on). Identity gaps are unused ranges of identity
values, typically burned when the server is shut down abruptly.

42. UNION and UNION ALL? What is the difference which is faster?

UNION removes duplicates (it performs a sort / distinct step).

UNION ALL keeps duplicates and is therefore faster.

43. What is a correlated subquery? What happens exactly in a correlated subquery?

A correlated subquery references columns of the outer query, so it is logically re-evaluated
once for each candidate row of the outer query.

44. In the isql utility, what are the -o and -i options?

-o writes the query output to a file.

-i executes the set of SQL statements stored in a file.

45. How can I ignore duplicates while loading data through BCP?

Create a unique index with the ignore_dup_key option; duplicate rows are then discarded with a
warning instead of aborting the batch.
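A sketch (table and column names hypothetical):

create unique clustered index emp_idx
on emp(empid)
with ignore_dup_key
go

With this index in place, a bcp in simply discards incoming rows whose key already exists,
instead of aborting the batch.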

46. What is the difference between a User and a Login in Sybase?

A login authenticates to the server (master..syslogins); a user is the corresponding identity
inside a database (sysusers).

47. What are the system tables have you seen so far?

sysobjects, syscolumns, sysindexes, syscomments, sysqueryplans, sysdevices

48. How do you know all the processes in a SYBASE?


sysprocesses

UNIX

1. What is AWK and why do we need it?

awk is a pattern-scanning and text-processing language, handy for pulling fields out of command
output.

2. How to find a string in a file?

grep

3. What is SED?

sed is the stream editor, used for scripted, non-interactive text transformations.

4. How do you know all the processes in a UNIX?

ps

What happens when an SQL statement is submitted to ASE?

SQL statement is parsed by the language processor

SQL statement is normalized and optimized

SQL is executed

Task is put to sleep pending lock acquisition and logical or physical I/O

Task is put back on runnable queue when I/O returns

Task commits (writes commit record to transaction log)


Task is put to sleep pending log write

Task sends return status to client

Status values reported by sp_who, and the effect of the kill command on each:

* recv sleep: waiting on a network read. kill takes effect immediately.

* send sleep: waiting on a network send. kill takes effect immediately.

* alarm sleep: waiting on an alarm, such as waitfor delay "10:00". kill takes effect
immediately.

* lock sleep: waiting on a lock acquisition. kill takes effect immediately.

* sleeping: waiting on disk I/O or some other resource; usually indicates a process that is
running, but doing extensive disk I/O. Killed when it wakes up, usually immediately; a few
sleeping processes do not wake up and require a server reboot to clear.

* runnable: in the queue of runnable processes. kill takes effect immediately.

* running: actively running on one of the server engines. kill takes effect immediately.

* infected: the server has detected a serious error condition; extremely rare. The kill command
is not recommended; a server reboot is probably required to clear the process.

* background: a process, such as a threshold procedure, run by the server rather than by a user
process. kill takes effect immediately, but use it with extreme care; a careful check of
sysprocesses is recommended before killing a background process.

* log suspend: a process suspended by reaching the last-chance threshold on the log. Killed when
it wakes up: (1) when space is freed in the log by a dump transaction command, or (2) when an SA
uses the lct_admin function to wake up log suspend processes.

Only a System Administrator can issue the kill command: permission to use it cannot be
transferred.

T-SQL query to get all the tables and lock scheme info.

The following query gives a list of all the user tables and the locking scheme of each table:

select "Table" = left(name, 32),
       lock_scheme = case (sysstat2 & 57344)
                         when 8192  then "APL"
                         when 16384 then "DPL"
                         when 32768 then "DRL"
                     end
from sysobjects
where type = "U"
order by 1

Adaptive Server Enterprise:

Q1: Please let me know the system db names; what is the purpose of sybsystemdb?

Q2: Suppose our tempdb is filling up or has filled up and you can't recycle the db server; what
would be your steps?

Q3: The business team (AD) is reporting slow query performance; how will you investigate? Please
consider all cases (hint: memory, stats, indexes, reorg, locks, etc.).

Q4: Suppose our tempdb is not recovered; can we create a new database?

Q5: We have configured 7 dataserver engines for our PROD server (we have sufficient CPUs), but
we are still facing a performance hit. What are the possible root causes?

Q6: Suppose we are doing an ASE 15 upgrade by dump & load, and the 12.5 server has 2000 logins.
Since syslogins has a different table structure in the two environments, we can't use bcp; how
will we move these logins from 12.5 to 15.0?

Q7: Which feature of ASE 15.0 most impressed you, and why?

Q8: What is your org's backup policy? What is dump tran with standby_access?

Q9: What is log suicide?

Q10: When do we require a log suicide of a DB?

Q11: What is bypass recovery? When do we require bypass recovery?

Q12: What is the difference between shutdown and shutdown with nowait, besides the immediate
shutdown difference?

Q13: Suppose huge transactions are running in one of our databases and we issue shutdown with
nowait. Will it affect the server restart, and how?

Q14: What is a named data cache? What is buffer pooling, and how does the cache hit ratio affect
system performance?

Q15: We are getting stack traces for one of our databases. How will you investigate?

Q16: Is object-level recovery possible in ASE?

Q17: What is the difference between the sysstatistics and systabstats tables?

Q18: What is a histogram, and what is its default step value?

Q19: Why would we require a non-default step value in a histogram?

Q20: Can we run update statistics on one table in two steps (half the table first, then the
rest)?

Interview Questions on User Management & Permissions

1. What is the Sybase security model for any user/login?

2. What is the difference between syslogins and sysusers?

3. How can we add a login in ASE? What are the required parameters of sp_addlogin?

4. What are aliases?

5. What's the difference between a role and a group, and which one is better?

6. How can we sync the logins from the prod to the UAT server? How many tables do we need to
take care of for the login sync?

7. What's a suid mismatch?

8. Why do we require aliases?

9. What's the importance of the sysroles table in each database?

10. Explain syslogins, syssrvroles, sysloginroles, and sysroles, and what's the link among them?

11. What is proxy authorization?

12. During a refresh from the PROD to the UAT environment, which tables do we need to take care
of?

13. Explain the sysprotects table and the sp_helprotect procedure.

14. Can we change the password of another login? If yes, how?

15. What role is required for user management?

16. What is the difference between the 12.5 syslogins and the 15.5 syslogins?

17. What is the guest user in a database, and why do we require the guest user?

18. What is the keycustodian_role in ASE 15.5?

19. How can we set up a password policy? Explain sp_passwordpolicy.

20. Can we enable a password history feature? From which version is it available, and how can we
do that?

21. Can we configure a SQL procedure that executes during login, and how can we do that?

New Questions on 21st Feb 2011

1. How can we get the compression level information from the dump files?

2. What is the difference between update and exclusive locks?

3. What is an isolation level in ASE, and what is the default value?

4. How can we avoid deadlocks in the database?

5. Is there any way to print the deadlock information in the errorlog?

6. Give two benefits of creating a database using the for load option.

7. What are the new features of Sybase 15, and which are you using in your day-to-day
operations?

8. What is the join order in ASE (suppose we have 4-5 tables of different sizes)?

9. What is the difference between sp_sysmon and the MDA tables?

10. Can we capture the output of sybmon in a table?

Replication Server:

Q1: How can we know whether the current ASE and Replication Server setup is a warm standby setup
or not?

Q2: What is the function of the SQM and the SQT?

Q3: What are the 1TP & 2TP?

Q4: In how many ways can we find the details of the transaction that is causing a thread to go
down?

Q5: Please explain the functionality of the rep server, starting from the PDB log through to the
RDB.

Q6: What is the difference between the DSI and DSI EXEC threads?

Q7: Can we dump the queues?

Q8: Suppose our queues are filling up and will be 100% full in the next 2 hours; how will you
investigate, and what are the steps for troubleshooting?

Q9: How can we find the RSSD server name from the Replication Server?

Q10: What is the importance of the materialization & dematerialization queues?

Q11: What is the DIST thread of the Replication Server?

Q12: What is the difference between a connection and a route?

Q13: What is the purpose of the ID server in a replication setup?

Q14: What is switch active?
New Questions:

What is the difference between sp_setreplicate and sp_setreptable?

What is the difference between routes and connections?

How can we check whether the current replication setup is warm standby, table-level, or
database-level?

What would be the impact of a long-running transaction in the PDB on the whole replication
setup?

Suppose there is a temp table in a stored procedure and we want to replicate it?

What is the importance of the rs_locator table in the Replication Server?

What is dbcc settrunc('ltm', 'valid'/'ignore')? When do we use this dbcc command?

What is the difference between rs_zeroltm and dbcc settrunc('ltm', 'valid')?

What are the different users in a common replication setup?

What is rs_subcmp?

New Questions on 21st Feb

1. What are routes?

2. How can routes enhance performance?

3. What is a function string?

4. The replication queues are filling up; where do we need to look for the root cause?

5. If the DSI is down, how can we bring it up? What is rs_exceptions?

6. In a table-level replication setup we need to alter a column; what would be the steps for
the same?

7. Suppose there is a size mismatch between the table columns and the replication definition;
what will happen?

8. How can we refresh a database in a replication environment?

9. What factors affect the replication agent's performance in the primary database?

10. How can we do master database replication? Is it possible? What information can we
replicate?

New Questions on 11th March 2011

=============================
What is an identity column?
What are the advantages and disadvantages of identity columns?
From a performance point of view, which is better: if exists or if not exists?
How can we avoid fragmentation in a table?
There is an update statement on one APL and one DOL table. Which one would be faster? Consider
the cases: where clause on a clustered index column, and not using any index.
Why is reorg faster on a DOL table compared to a clustered index rebuild on an APL table?
Why is creating a clustered index with sorted_data on an APL table faster than reorg rebuild on
a DOL table?
What is the Sybase recommendation for tempdb size? Suppose we have 300GB and 150GB databases in
the server; what would be the recommended sizing of tempdb?
What is the difference between dsync and direct io?
Suppose we are not concerned about recovery of the database; which would be better for
performance, dsync (on/off) or direct io, and why?
What is asynchronous prefetch? How does it help to enhance performance?
We have a 4K page size server; what are the possible pool sizes in the server?
As Sybase recommends a 4K pool for log usage on a 2K page size server, what is the pool
recommendation for a 4K page size server?
How can we reduce spinlock contention without partitioning the data cache?
Can we have spinlock contention with a single engine?
In a sysmon report, which five sections do you look at for performance?
What is the metadata cache?
What is an archive database?
How can we enable an archive database for a compressed backup?
How is object-level recovery possible in ASE?
How can we find the culprit spid which has filled up the tempdb database?
How can we find the culprit spid which has badly used the log segment of tempdb?
What is partitioning? How does partitioning help to increase performance?
Suppose a table is partitioned on a column; how will the dataserver handle an insert into the
table?
Apart from query plans, what else resides in the procedure cache?
What is the new config parameter "optimization goal"? What values can we give it?
A user is experiencing very slow performance; what can be the reasons for the slowness?
What is engine affinity, and how can we set the engine affinity?
If there are 3 CPUs in the box, how many engines can we configure?
Suppose the dataserver is running very slowly and sp_monitor shows 100% CPU usage; what can the
possible issue be? Where will you look?
What are the error classes in Replication Server?
What is the difference between warm standby and table-level replication?
Can you please give five cases in which a thread goes down in replication?
What are triggers? What are the types of triggers, and how many triggers can we configure on a
table?
What are the different locking schemes in ASE, and what are latches?
How can we dump a replication queue?

How to Perform SQL Server Row-by-Row Operations Without Cursors

BY DAVIDVANDESOMPELE

SQL cursors have been a curse to database programming for many years because of their poor
performance. On the other hand, they are extremely useful because of their flexibility in allowing
very detailed data manipulations at the row level. Using cursors against SQL Server tables can
often be avoided by employing other methods, such as using derived tables, set-based queries,
and temp tables. A discussion of all these methods is beyond the scope of this article, and there
are already many well-written articles discussing these techniques.

The focus of this article is directed at using non-cursor-based techniques for situations in
which row-by-row operations are the only, or the best, method available to solve a problem.
Here, I will demonstrate a few programming methods that provide most of the cursor's
flexibility, but without the dramatic performance hit.

Let's begin by reviewing a simple cursor procedure that loops through a table. Then we'll
examine a non-cursor procedure that performs the same task.

if exists (select * from sysobjects where name = N'prcCursorExample')
    drop procedure prcCursorExample
go

CREATE PROCEDURE prcCursorExample
AS
/*
** Cursor method to cycle through the Customer table and get Customer Info for each iRowId.
**
** Revision History:
**
** Date       Name    Description    Project
**
** 08/12/03   DVDS    Create         -
**
*/
SET NOCOUNT ON

-- declare all variables!
DECLARE @iRowId int,
        @vchCustomerName nvarchar(255),
        @vchCustomerNmbr nvarchar(10)

-- declare the cursor
DECLARE Customer CURSOR FOR
SELECT iRowId,
       vchCustomerNmbr,
       vchCustomerName
FROM CustomerTable

OPEN Customer

FETCH Customer INTO @iRowId,
                    @vchCustomerNmbr,
                    @vchCustomerName

-- start the main processing loop.
WHILE @@Fetch_Status = 0
BEGIN
    -- This is where you perform your detailed row-by-row processing.

    -- Get the next row.
    FETCH Customer INTO @iRowId,
                        @vchCustomerNmbr,
                        @vchCustomerName
END

CLOSE Customer

DEALLOCATE Customer

RETURN


As you can see, this is a very straight-forward cursor procedure that loops through a table called
CustomerTable and retrieves iRowId, vchCustomerNmbr and vchCustomerName for every
row. Now we will examine a non-cursor version that does the exact same thing:

if exists (select * from sysobjects where name = N'prcLoopExample')
    drop procedure prcLoopExample
go

CREATE PROCEDURE prcLoopExample
AS
/*
** Non-cursor method to cycle through the Customer table and get Customer Info for each iRowId.
**
** Revision History:
**
** Date       Name    Description    Project
**
** 08/12/03   DVDS    Create
**
*/
SET NOCOUNT ON

-- declare all variables!
DECLARE @iReturnCode int,
        @iNextRowId int,
        @iCurrentRowId int,
        @iLoopControl int,
        @vchCustomerName nvarchar(255),
        @vchCustomerNmbr nvarchar(10),
        @chProductNumber nchar(30)

-- Initialize variables!
SELECT @iLoopControl = 1
SELECT @iNextRowId = MIN(iRowId)
FROM CustomerTable

-- Make sure the table has data.
IF ISNULL(@iNextRowId,0) = 0
BEGIN
    SELECT 'No data found in table!'
    RETURN
END

-- Retrieve the first row
SELECT @iCurrentRowId = iRowId,
       @vchCustomerNmbr = vchCustomerNmbr,
       @vchCustomerName = vchCustomerName
FROM CustomerTable
WHERE iRowId = @iNextRowId

-- start the main processing loop.
WHILE @iLoopControl = 1
BEGIN
    -- This is where you perform your detailed row-by-row processing.

    -- Reset looping variables.
    SELECT @iNextRowId = NULL

    -- get the next iRowId
    SELECT @iNextRowId = MIN(iRowId)
    FROM CustomerTable
    WHERE iRowId > @iCurrentRowId

    -- did we get a valid next row id?
    IF ISNULL(@iNextRowId,0) = 0
    BEGIN
        BREAK
    END

    -- get the next row.
    SELECT @iCurrentRowId = iRowId,
           @vchCustomerNmbr = vchCustomerNmbr,
           @vchCustomerName = vchCustomerName
    FROM CustomerTable
    WHERE iRowId = @iNextRowId
END

RETURN

There are several things to note about the above procedure.

For performance reasons, you will generally want to use a column like iRowId as your basis
for looping and row retrieval. It should be an auto-incrementing integer data type, along with
being the primary key column with a clustered index.

There may be times in which the column containing the primary key and/or clustered index is not
the appropriate choice for looping and row retrieval. For example, the primary key and/or
clustered index may have already been built on a column using uniqueidentifier as the data type.
In such a case, you can usually add an auto-incrementing integer data column to the table and
build a unique index or constraint on it.

The MIN function is used in conjunction with "greater than" (>) to retrieve the next available
iRowId. You could also use the MAX function in conjunction with "less than" (<) to achieve the
same result:

SELECT @iNextRowId = MAX(iRowId)

FROM CustomerTable

WHERE iRowId < @iCurrentRowId

Be sure to reset your looping variable(s) to NULL before retrieving the next @iNextRowId
value. This is critical because the SELECT statement used to get the next iRowId will not set the
@iNextRowId variable to NULL when it reaches the end of the table. Instead, it will fail to
return any new values and @iNextRowId will keep the last valid, non-NULL, value it received,
throwing your procedure into an endless loop. This brings us to the next point, exiting the loop.
When @iNextRowId is NULL, meaning the loop has reached the end of the table, you can use
the BREAK command to exit the WHILE loop. There are other ways of exiting from a WHILE
loop, but the BREAK command is sufficient for this example.

You will notice that in both procedures I have included the comment listed below in order to
illustrate the area in which you would perform your detailed, row-level processing.

-- This is where you perform your detailed row-by-row processing.

Quite obviously, your row level processing will vary greatly, depending upon what you need to
accomplish. This variance will have the most profound impact on performance.

For example, suppose you have a more complex task which requires a nested loop. This is
equivalent to using nested cursors; the inner cursor, being dependent upon values retrieved from
the outer one, is declared, opened, closed and deallocated for every row in the outer cursor.
(Please reference the DECLARE CURSOR section in SQL Server Books Online for an example
of this.) In such a case, you will achieve much better performance by using the non-cursor
looping method, because the server is not burdened by the cursor activity.

Here is an example procedure with a nested loop and no cursors:

if exists (select * from sysobjects where name = N'prcNestedLoopExample')
    drop procedure prcNestedLoopExample
go

CREATE PROCEDURE prcNestedLoopExample
AS
/*
** Non-cursor method to cycle through the Customer table and get Customer
** Name for each iCustId. Get all products for each iCustId.
**
** Revision History:
**
** Date       Name    Description    Project
**
** 08/12/03   DVDS    Create
**
*/
SET NOCOUNT ON

-- declare all variables!
DECLARE @iReturnCode int,
        @iNextCustRowId int,
        @iCurrentCustRowId int,
        @iCustLoopControl int,
        @iNextProdRowId int,
        @iCurrentProdRowId int,
        @vchCustomerName nvarchar(255),
        @chProductNumber nchar(30),
        @vchProductName nvarchar(255)

-- Initialize variables!
SELECT @iCustLoopControl = 1
SELECT @iNextCustRowId = MIN(iCustId)
FROM Customer

-- Make sure the table has data.
IF ISNULL(@iNextCustRowId,0) = 0
BEGIN
    SELECT 'No data found in table!'
    RETURN
END

-- Retrieve the first row
SELECT @iCurrentCustRowId = iCustId,
       @vchCustomerName = vchCustomerName
FROM Customer
WHERE iCustId = @iNextCustRowId

-- Start the main processing loop.
WHILE @iCustLoopControl = 1
BEGIN
    -- Begin the nested (inner) loop.
    -- Get the first product id for the current customer.
    SELECT @iNextProdRowId = MIN(iProductId)
    FROM CustomerProduct
    WHERE iCustId = @iCurrentCustRowId

    -- Make sure the product table has data for the current customer.
    IF ISNULL(@iNextProdRowId,0) = 0
    BEGIN
        SELECT 'No products found for this customer.'
    END
    ELSE
    BEGIN
        -- retrieve the first full product row for the current customer.
        SELECT @iCurrentProdRowId = iProductId,
               @chProductNumber = chProductNumber,
               @vchProductName = vchProductName
        FROM CustomerProduct
        WHERE iProductId = @iNextProdRowId
    END

    WHILE ISNULL(@iNextProdRowId,0) <> 0
    BEGIN
        -- Do the inner loop row-level processing here.

        -- Reset the product next row id.
        SELECT @iNextProdRowId = NULL

        -- Get the next product id for the current customer
        SELECT @iNextProdRowId = MIN(iProductId)
        FROM CustomerProduct
        WHERE iCustId = @iCurrentCustRowId
        AND iProductId > @iCurrentProdRowId

        -- Get the next full product row for the current customer.
        SELECT @iCurrentProdRowId = iProductId,
               @chProductNumber = chProductNumber,
               @vchProductName = vchProductName
        FROM CustomerProduct
        WHERE iProductId = @iNextProdRowId
    END

    -- Reset inner loop variables.
    SELECT @chProductNumber = NULL
    SELECT @vchProductName = NULL
    SELECT @iCurrentProdRowId = NULL

    -- Reset outer looping variables.
    SELECT @iNextCustRowId = NULL

    -- Get the next iCustId.
    SELECT @iNextCustRowId = MIN(iCustId)
    FROM Customer
    WHERE iCustId > @iCurrentCustRowId

    -- Did we get a valid next row id?
    IF ISNULL(@iNextCustRowId,0) = 0
    BEGIN
        BREAK
    END

    -- Get the next row.
    SELECT @iCurrentCustRowId = iCustId,
           @vchCustomerName = vchCustomerName
    FROM Customer
    WHERE iCustId = @iNextCustRowId
END

RETURN

In the above example we are looping through a customer table and, for each customer id, we are
then looping through a customer product table, retrieving all existing product records for that
customer. Notice that a different technique is used to exit from the inner loop. Instead of using a
BREAK statement, the WHILE loop depends directly on the value of @iNextProdRowId. When
it becomes NULL, having no value, the WHILE loop ends.

Conclusion

SQL cursors are very useful and powerful because they offer a high degree of row-level data
manipulation, but this power comes at a price: poor performance. In this article I have
demonstrated an alternative that offers much of the cursor's flexibility, but without the
negative impact on performance. I have used this alternative looping method several times in my
professional career, to the benefit of cutting many hours of processing time on production SQL
Servers.

In both of these cases the command

SET SHOWPLAN ON

is your greatest ally!

I also like to run

SET NOEXEC ON

so that the server doesn't execute the query. Useful when you want to benchmark UPDATE or
DELETE commands and not accidentally change any data!

(Remember to run SET NOEXEC ON last, because if you run it first the SET SHOWPLAN ON statement
will, of course, not be run!)
1> SET SHOWPLAN ON

2> SET NOEXEC ON

3> GO

1> SELECT *

2> FROM post,


3> users

4> WHERE post.userid = users.userid

5> GO

QUERY PLAN FOR STATEMENT 1 (at line 1).

STEP 1

The type of query is SELECT.

FROM TABLE

users

Nested iteration.

Table Scan.

Forward scan.

Positioning at start of table.

Using I/O Size 2 Kbytes for data pages.

With LRU Buffer Replacement Strategy for data pages.

FROM TABLE

post

Nested iteration.

Table Scan.
Forward scan.

Positioning at start of table.

Using I/O Size 2 Kbytes for data pages.

With LRU Buffer Replacement Strategy for data pages.

As you can see from the output we have a table scan on BOTH tables! YIKES! This will cause
some problems as your tables start to fill up with information.

To fix this problem, create an index on users.userid like this:

1> CREATE INDEX userid ON users( userid )

2> GO

1> SELECT *

2> FROM post,

3> users

4> WHERE post.userid = users.userid

5> GO

QUERY PLAN FOR STATEMENT 1 (at line 1).

STEP 1

The type of query is SELECT.

FROM TABLE

post

Nested iteration.
Table Scan.

Forward scan.

Positioning at start of table.

Using I/O Size 2 Kbytes for data pages.

With LRU Buffer Replacement Strategy for data pages.

FROM TABLE

users

Nested iteration.

Index : userid

Forward scan.

Positioning by key.

Keys are:

userid ASC

Using I/O Size 2 Kbytes for data pages.

With LRU Buffer Replacement Strategy for data pages.

As you can see, the users table is now using the index you created. The reason why post is
still a table scan is that you are selecting all rows, so an index won't help you at all. A more
complex WHERE clause which uses more columns from post would require an index to avoid the
table scan.

To turn off NOEXEC and SHOWPLAN, switch them off in the reverse order (NOEXEC first, or nothing
will execute):

1> SET NOEXEC OFF

2> SET SHOWPLAN OFF

3> GO
QUERY PLAN FOR STATEMENT 1 (at line 1).

STEP 1

The type of query is SET OPTION OFF.

QUERY PLAN FOR STATEMENT 2 (at line 2).

STEP 1

The type of query is SET OPTION OFF.

1>

To enable stored procedure showplans:

Code:

DBCC TRACEON( 3604, 302 )

SET SHOWPLAN ON

SET FMTONLY ON

GO

EXEC sp_something

GO
To check what exactly is executed at the server level when a frontend user kicks off a report or
any application module, use:

dbcc traceon(11201,11202,11203,11204,11205,11206)

This produces huge output in the errorlog. Make sure to turn it off when the job is done.

How to separate data and log segments

1. Use disk init to create the new log device for your database.

2. dump tran with truncate_only, to make sure we clear the log.

3. Use sp_logdevice to move the log to the new device:

sp_logdevice <dbname>, <new_log_device>

This changes the segmap of the old fragments from 7 to 3 and moves the log to the new device,
which has a segmap of 4.

4. dump tran with truncate_only, to clear any log records that might remain on the data device.

5. Use sp_helplog to make sure that the log starts on the log device.
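Putting the five steps together (device name, size, and paths hypothetical):

disk init name = "mydb_log",
    physname = "/var/sybase/mydb_log.dat",
    size = "200M"
go
dump tran mydb with truncate_only
go
sp_logdevice mydb, mydb_log
go
dump tran mydb with truncate_only
go
use mydb
go
sp_helplog    -- should report that the log starts on device mydb_log
go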

You might also like