licensing models:
*****************
there are 2 types of licensing models for sql server: the processor model and
the client access license (cal) model.
-the processor model is a descendant of the internet connector license and
allows unlimited connections for each processor that is licensed.
-the cal model licenses each client (not the server) and each connection.
-once we purchase a cal we can access any number of instances, making it the
least expensive method of licensing for a small to medium environment.
-when installing multiple instances of sql server on the same machine under the
cal model, you may have to license each instance separately.
-if you are running enterprise edition you will not need additional server
licenses, and you can install up to 16 instances per server and remain
supported by microsoft.
-for a very big server, enterprise edition of sql 2000 is usually used, and
enterprise edition supports database mirroring in sql 2005.
-sql 2005 is usually used in the enterprise, standard and workgroup editions.
software requirements:
-windows 2000 or windows 2003 server as the os
-sql 2005 enterprise edition works best on a windows 2003 server os
-by default a 32-bit sql server can address only about 3gb of memory; if we
need more, we enable awe and the appropriate switches in boot.ini, which can
raise the addressable memory to 32gb.
hardware requirements:
-sql 2000 needs a minimum of 180 mb of free disk space for installation
-minimum ram of 128 mb for sql 2000 and 512 mb for sql 2005
instance:
the term instance is typically used to describe a complete database environment,
including the rdbms software, table structure, stored procedures and other
functionality. it is most commonly used when administrators describe multiple
instances of the same database.
there are 2 types of instances
1)default instance
2)named instance
current activity shows all the current locks and blocks on the server.
use master
go
select @@version
go
-use master switches the connection to the master database, and select
@@version returns the version of the connected instance.
********************
creating a database:
********************
syntax:
use master
go
create database nag
on
(name = 'nag_data', filename = 'c:\sqldata\nag_data.mdf')
log on
(name = 'nag_log', filename = 'd:\sqllog\nag_log.ldf')
go
as soon as the database is created we should set its owner by executing the
stored procedure:
exec sp_changedbowner 'sa'
-the master database has 50 system tables, out of which 19 also exist in user
databases.
in order to create a database only if it does not already exist, the syntax is:
syntax:
use master
go
if not exists (select name from sysdatabases where name = 'nag')
create database nag
there are different datatypes which we will be using when creating the tables.
4)tcl (transaction control language)
-commit
-rollback
tables:
-in order to drop a table we use the syntax:
drop table <table name>
-insert into <table name> values (value, value, ...)
-select * into #<table name> from <table name> creates a temporary table (so if
# is present in the name it is a temporary table)
-in order to generate drop statements for all the user tables we use:
select 'drop table ' + name from sysobjects where xtype = 'u'
-in order to rename a column in the output of a query we use a column alias:
select <old column name> as <new column name> from <table name>
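the statements above can be sketched end to end; the table and column names
below (demo, #demo_copy) are hypothetical examples:

```sql
-- create a small table and copy it into a temporary table
create table demo (id int, name varchar(50))
insert into demo values (1, 'first')

-- the # prefix makes this a temporary table local to the session
select * into #demo_copy from demo

-- alias a column in the result set
select name as employee_name from demo

-- generate drop statements for every user table (xtype = 'u')
select 'drop table ' + name from sysobjects where xtype = 'u'

drop table demo
```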
*********
security:
*********
1)authentication:
-sql server has 2 modes of authentication
--windows authentication and
--mixed mode authentication
windows authentication:
-it needs no separate id and password
-only windows clients can log in
-it is sso (single sign on): it asks for the id and password only the first
time we log in
-it uses a trusted connection
2)authorization:
-roles and permissions
application roles:
-this is a temporary role; we create it in the database, and the role's context
ends as soon as the application's session with the database ends.
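an application role is created with sp_addapprole and activated with
sp_setapprole; the role name, table and password below are hypothetical
examples:

```sql
-- create an application role with a password (names are examples)
exec sp_addapprole 'order_entry_app', 'StrongP@ss1'

-- grant it permissions like any other role
grant select on customers to order_entry_app

-- the application activates the role after connecting; the user's own
-- permissions are suspended until the session ends
exec sp_setapprole 'order_entry_app', 'StrongP@ss1'
```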
permissions:
permissions are classified into 2 types
-statement level permissions (ddl)
-object level permissions (dml)
*******
backup:
*******
1)full backup:
-it is a complete backup
syntax:
backup database <database name> to disk = 'c:\sqlbackup\nag_db_20070411.bak'
with init, stats = 10
2)differential backup:
-it is the backup of the changes that have happened since the last full backup.
syntax:
backup database <database name> to disk = 'd:\sqlbackup\nag_diff_20070411.bak'
with init, stats = 10, differential
3)transaction log backup:
-it is the backup of only the ldf (log) file of the database
syntax:
backup log <database name> to disk = 'e:\sqlbackup\nag_db_20070411.trn'
with init, stats = 10
*********
recovery:
*********
orphan user: an orphan user is a user in a database with a security id (sid)
that does not exist in the syslogins table in the master database (the database
user exists but has no matching server login).
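orphaned users can be detected and remapped with sp_change_users_login; the
user/login name below is a hypothetical example:

```sql
-- list the orphaned users in the current database
exec sp_change_users_login 'Report'

-- remap an orphaned database user to a server login of the same name
exec sp_change_users_login 'Update_One', 'nag_user', 'nag_user'
```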
why does an ldf file grow, and how can we control the growth?
-the ldf file grows as transactions are logged, and uncommitted transactions
keep the log from being truncated. by committing transactions promptly and
taking regular transaction log backups we can control the growth of the ldf
file.
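one common remedy is to back up the log (which lets the inactive portion be
truncated) and then shrink the file; the database, path and file names below
are examples:

```sql
-- back up the transaction log so the inactive portion can be truncated
backup log nag to disk = 'e:\sqlbackup\nag_log.trn'

-- then shrink the log file to a target size in mb
use nag
go
dbcc shrinkfile ('nag_log', 100)
```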
restore:
-in order to restore in sql 2000 we have to follow a certain procedure.
-we give the following commands.
syntax:
restore verifyonly from disk = 'c:\sqlbackup\nag.bak'
-this verifies whether the backup file is valid or not.
the commands used to forcibly reset a database that is still loading or is in
suspect mode are:
sp_resetstatus <dbname>
dbcc dbrecover (<dbname>)
******
index:
******
-indexes are user defined data structures which provide fast access to data
when it is searched by value on the index key.
-the query optimizer determines which indexes to use.
-indexes store information using standard b-trees (balanced trees).
-b-trees are managed and balanced, hence finding any record requires about the
same amount of resources, and retrieval speed is consistent.
-1 data page = 8 kb
1 extent = 8 pages = 64 kb
-if an extent is used by a single object it is a uniform extent; if it is
filled by multiple objects it is a mixed extent.
-the number of levels in an index varies depending on the number of rows and
the size of the key column of the index.
-in every index the leaf level contains every key value, in key sequence.
clustered index:
-the leaf level contains the data pages, not just the index key; the data
itself is part of the clustered index.
-when the index is scanned down to the leaf level, it is the actual data that
is retrieved, not a pointer to it; hence a clustered index is always preferred.
-there can be only one clustered index per table (as the data can be sorted on
only one column ordering).
-a clustered index allows especially fast access for queries that require a
range of values, because it arranges the data in sequential order.
-the bookmark is the clustered index key for the corresponding data row.
-there is no need to scan the entire table because the data page is exactly
identified.
-by default the primary key creates a clustered index.
syntax:
create [u] [c|nc] index <index name>
on <table name> (<column name>)
with
[fillfactor = ], [pad_index],
[ignore_dup_key], [drop_existing],
[statistics_norecompute], [sort_in_tempdb]
--[u]--unique
--[c]--clustered
--[nc]--non clustered
--[fillfactor=]--amount of free space left in the leaf pages of the index
--[pad_index]--applies the fill factor to the intermediate index pages as well
--[ignore_dup_key]--ignore duplicate key values on insert
--[drop_existing]--drop and recreate an existing index in a single statement
--[sort_in_tempdb]--the intermediate sort results used to build the index are
stored in tempdb; a new index can be built there and the old (e.g. corrupted)
index dropped once the new one is ready.
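a concrete instance of the syntax above, with hypothetical table, column and
index names:

```sql
-- unique clustered index on the key column
create unique clustered index ix_emp_empid
on emp (empid)
with fillfactor = 80, sort_in_tempdb

-- nonclustered index on a frequently searched column
create nonclustered index ix_emp_salary
on emp (salary)
```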
-each index has a row in the sysindexes table; the indid column has a value of
1 for the clustered index.
-for nonclustered indexes the value of indid is 2 to 250, so in all we can have
about 249 nonclustered indexes per table.
-indid 255 is for lobs (large objects).
-the sysindexes table has various useful columns:
-rowcnt-number of data rows in the table
-dpages-data pages (number of pages across which the data is spread)
-for a clustered index dpages counts the actual data pages, whereas for a
nonclustered index it counts the index pages.
-sp_spaceused gives the total size used by a table.
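these columns can be inspected directly; the table name below is an example:

```sql
-- inspect index metadata for one table (sql 2000-style system table)
select name, indid, rowcnt, dpages
from sysindexes
where id = object_id('emp')

-- total space used by the table
exec sp_spaceused 'emp'
```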
managing an index:
-when rows are added they are automatically inserted into the correct position
in the table.
***********************
types of fragmentation:
***********************
1)internal:
-occurs when free space is available within the index pages
2)external:
-occurs when the logical order of the pages does not match the physical order
detecting fragmentation:
command:
dbcc showcontig
syntax:
use <dbname>
go
dbcc showcontig
-an index seek navigates the b-tree directly to the qualifying rows, whereas an
index scan reads the pages in order; either can occur on a clustered or a
nonclustered index.
removing fragmentation:
method(1):
-rebuild or recreate the index
-here we give commands like create index with drop_existing, or drop index and
create index, or dbcc dbreindex (<table name>)
-the table is not available while the index is rebuilt
method(2):
-reorganise or defragment the index
-we defragment the fragmented index with
dbcc indexdefrag (<dbname>, <table name>, <index name>)
-the table remains available while these operations are performed.
-it does an in-place reordering of the leaf pages, similar to a bubble sort.
-it compacts the pages in the index according to the fill factor.
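detecting and then removing fragmentation can be sketched as follows; the
database, table and index names are examples:

```sql
use nag
go
-- report fragmentation (scan density, logical fragmentation) for one table
dbcc showcontig ('emp')

-- offline fix: rebuild all indexes on the table (table locked meanwhile)
dbcc dbreindex ('emp')

-- online fix: defragment one index while the table stays available
dbcc indexdefrag (nag, 'emp', 'ix_emp_empid')
```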
************************
sql server architecture:
************************
****************************
database and database files:
****************************
1)master database:
-system catalogs
-info about the disk space
-file allocations
-system wide configuration
-login accounts
-existence of the other databases
2)model database:
-its a template
3)tempdb:
-it is a workspace used by applications and system processes
-it is recreated at every restart and cannot be recovered
4)msdb:
-used by the sql server agent service
-the thread that performs auto shrink has spid 6 and by default shrinks at a 30
min interval.
database options:
1)state options:
-alter database <db name>
-we can set the database to single user, multi user or restricted user mode
-offline/online; by default it is online
-read only
2)cursor options:
-local/global cursor
-a cursor is a variable that keeps fetching values. cursor_close_on_commit
-this closes the cursor on commit so that the space is not held
3)recovery options:
-full, bulk logged and simple recovery models
4)auto options:
-auto shrink/auto create statistics
5)sql options:
-ansi sql
-set commands
declare @id int
begin
set @id = 100
select * from emp where empid = @id
end
raid configurations:
1)raid 0
-data striping
-no fault tolerance (1 read + 1 write)
-good speed
2)raid 1
-disk mirroring
-(1 read + 2 writes)
-it has 2 drives
-recommended for transaction log files
3)raid 5
-it has more than 2 drives
-it adds parity whenever it writes
-data striping with parity
-good speed
-good fault tolerance
4)raid 10
-data striping plus mirroring
-faster data reads and writes as it does not need to manage parity
-the best option for fault tolerance
**********
migration:
**********
1)in-place migration:
-upgrading the existing instance on the same hardware
(installing sql 2005 on top of an existing instance)
3)run the sql 2005 upgrade advisor tool on the existing sql 2000 instance
-we can download it from www.microsoft.com/sql/solutions/upgrade/default.mspx
4)address all the compatibility issues and rerun the tool to ensure zero
compatibility issues.
5)when the hardware is available and the downtime is decided, detach the
databases from the existing sql instance using the sp_detach_db procedure
-here we have to stop replication, if any, before the detach.
6)move the detached database file(s) and log file(s) to the destination
location
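the detach in step 5 can be sketched as follows; the database name matches the
attach example used later in these notes:

```sql
-- detach the database so its files can be copied (step 5)
use master
go
exec sp_detach_db 'aircheck_dev'
```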
7)attach the copied files to the new sql 2005 instance by using the create
database statement with for attach or for attach_rebuild_log
-create database aircheck_dev
on
(filename = 'g:\mssql\data\aircheck_dev_data.mdf'),
(filename = 'f:\blobdata\aircheck2_dev.ndf'),
(filename = 'e:\logs\aircheck_dev_log.ldf')
for attach;
go
10)transfer:
-transfer logins: generate the script for users/logins using sp_help_revlogin
and execute it on the destination (download from
http://support.microsoft.com/kb/246133 )
-be sure to disable the "enforce password policy" option after the logins are
created
11)transfer all the old dts packages using the ssis dts import tool
12)transfer all the jobs and db maintenance plans using ssis tasks. while
moving the jobs, the databases on which the jobs are scheduled must exist on
the destination server.
******
locks:
******
1)shared lock:
-acquired automatically when data is read
-if we issue a select statement, a shared lock is taken.
-a process cannot acquire an exclusive lock on data while other processes hold
shared locks on it.
-when a shared lock is placed, another shared lock can also be placed.
-to avoid shared locks on a table while accessing it, we give the [nolock]
hint.
2)exclusive lock:
-acquired automatically on data when it is modified by an insert, update or
delete operation.
-only one process at a time can hold an exclusive lock on a particular data
resource.
-whenever any modification is going on, an exclusive lock is placed.
-while an exclusive lock is held, no other process can place a shared lock or
another exclusive lock on that resource.
-exclusive locks are held until the end of the transaction.
3)update lock:
-acquired during data modification, but first the table must be searched to
find the resource that needs to be modified (intent-to-update).
-provides compatibility with shared locks while reading the data.
-the update lock escalates to an exclusive lock for the modification.
lock compatibility (o = compatible, x = incompatible):
     s   x   ix
s    o   x   x
x    x   x   x
ix   x   x   o
-intent locks:
there are 3 types of intent locks
1)intent shared (is) lock
2)intent exclusive (ix) lock
3)shared with intent exclusive (six) lock
--six lock:
holds a shared lock on a resource when an exclusive lock will be needed later
on part of it
key locks:
-the engine tries to lock the actual index keys accessed while processing the
query.
-select * from emp
where salary between 3000 and 5000
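the locks held by a query like the one above can be observed while a
transaction holds them; the table name is an example:

```sql
begin transaction
-- holdlock keeps the shared locks until the transaction ends
select * from emp with (holdlock)
where salary between 3000 and 5000

-- sp_lock lists the locks currently held, including key locks
exec sp_lock

rollback transaction
```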
blocks:
-blocking occurs when multiple spids are waiting to access the same object or
resource.
deadlock:
-a deadlock occurs when 2 processes (spids) are each waiting for a resource and
neither process can advance because the other prevents it from getting the
resource.
-without intervention, neither process can ever progress.
-when a deadlock occurs one of the spids is automatically killed and is
considered the victim of the deadlock; the error number is 1205.
-in order to trace deadlocks we give the commands:
dbcc traceon (1205, 3605, -1)
go
dbcc tracestatus (-1)
go
(or)
-we can enable 1205 and 3605 as startup parameters
-then we trace the deadlock with the trace flags turned on
isolation levels:
-read uncommitted (dirty read: can read anything)
-read committed (the default)
-repeatable read
-serializable
-inner join--inner joins return all rows from multiple tables where the join
condition is met.
for example, an inner join of the suppliers and orders tables would return all
rows where there is a matching supplier_id value in both the suppliers and
orders tables.
-outer join--this type of join returns all rows from one table and only those
rows from a secondary table where the joined fields are equal (join condition
is met).
for example, an outer join would return all rows from the suppliers table and
only those rows from the orders table where the joined fields are equal.
in oracle-style syntax, a (+) after the orders.supplier_id field indicates
that, if a supplier_id value in the suppliers table does not exist in the
orders table, all fields in the orders table will display as <null> in the
result set.
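the joins described above can be sketched in ansi syntax (which sql server uses
instead of oracle's (+) notation); the table and column names follow the
example in the text:

```sql
-- inner join: only suppliers that have at least one order
select s.supplier_id, s.supplier_name, o.order_date
from suppliers s
inner join orders o on s.supplier_id = o.supplier_id

-- left outer join: every supplier; order columns are null when no order exists
select s.supplier_id, s.supplier_name, o.order_date
from suppliers s
left outer join orders o on s.supplier_id = o.supplier_id
```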
3)cursors/temp tables
performance tuning:
-hardware level
-sql configuration
-i/o subsystems
-application code
--sql configuration:
-a 32-bit operating system lets a process address only about 3gb of ram
-so in order to let sql server use more ram we have to enable awe (with the
appropriate switches in boot.ini) and then raise sql server's memory setting.
the command used for this is
sp_configure 'awe enabled', 1
reconfigure
-the affinity mask setting controls which processors sql server is allowed to
run on
-the query governor cost limit value is set to prevent long running queries
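the settings above are all changed through sp_configure; the cost limit value
below is an arbitrary example:

```sql
-- advanced options must be visible before these settings can be changed
sp_configure 'show advanced options', 1
reconfigure

-- enable awe so sql server can use memory beyond the 32-bit limit
sp_configure 'awe enabled', 1
reconfigure

-- refuse queries whose estimated cost exceeds 300 (value is an example)
sp_configure 'query governor cost limit', 300
reconfigure
```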
******************
high availability:
******************
sql server clustering provides advantages like high availability and disaster
recovery.
2)active-active:
-in this there are 2 servers
-some databases are on the first server and the other databases are on the
second server.
-so in this there is load balancing
-for this type we require 5 ip addresses.
-there is a constant connection between the servers over which they keep
talking; this is called the heartbeat.
*******************
database mirroring:
*******************
database mirroring---principal->mirror->witness
-database mirroring is fully supported in the enterprise edition; the standard
edition supports only the safety full (synchronous) mode.
-database mirroring is a dual write concept.
2)high safety:
-it writes data simultaneously to both the principal (p) and the mirror (m),
and (p) waits for confirmation from (m) before committing.
-it is a synchronous process.
3)high performance:
-(p) writes and commits first, and only then does (m) write.
-it is an asynchronous process: (p) does not wait for confirmation from (m).
************
replication:
************
-snapshot replication:
it acts in the manner its name implies. the publisher simply takes a snapshot of
the entire replicated database and shares it with the subscribers. of course, this
is a very time and resource-intensive process. for this reason, most
administrators don't use snapshot replication on a recurring basis for databases
that change frequently. there are two scenarios where snapshot replication is
commonly used. first, it is used for databases that rarely change. second, it is
used to set a baseline to establish replication between systems while future
updates are propagated using transactional or merge replication.
-transactional replication:
it offers a more flexible solution for databases that change on a regular basis.
with transactional replication, the replication agent monitors the publisher for
changes to the database and transmits those changes to the subscribers. this
transmission can take place immediately or on a periodic basis.
-merge replication:
it allows the publisher and subscriber to independently make changes to the
database. both entities can work without an active network connection. when they
are reconnected, the merge replication agent checks for changes on both sets of
data and modifies each database accordingly. if changes conflict with each other,
it uses a predefined conflict resolution algorithm to determine the appropriate
data. merge replication is commonly used by laptop users and others who can not be
constantly connected to the publisher.
each one of these replication techniques serves a useful purpose and is well-
suited to particular database scenarios.
if you're working with sql server 2005, you'll need to choose your edition based
upon your replication needs. each edition has differing capabilities:
-express edition has extremely limited replication capabilities. it's able to
act
as a replication client only.
-workgroup edition adds limited publishing capabilities. it's able to serve
five clients using transactional replication and up to 25 clients using merge
replication. it can also act as a replication client.
-standard edition has full, unlimited replication capabilities with other sql
server databases.
-enterprise edition adds a powerful tool for those operating in mixed database
environments -- it's capable of replication with oracle databases.
as you've undoubtedly recognized by this point, sql server's replication
capabilities offer database administrators a powerful tool for managing and
scaling databases in an enterprise environment.
**********************************************
how to start sql server in a single user mode:
**********************************************
-failover is a concept wherein, when the principal server stops working, the
mirror server takes over its databases; this is called failover.
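to answer the heading above: sql server is started in single-user mode from the
command line; the default instance/service name below is an example:

```shell
# stop the service, then start it with the single-user switch
net stop MSSQLSERVER
net start MSSQLSERVER /m

# or start the engine directly from its binn directory
sqlservr.exe -m
```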
************************************
how do we restore a master database?
************************************
-if changes have been made to master since a backup was created, those changes
are lost when the backup is restored. you must re-create those changes by
executing the statements that re-create the missing changes. for example, if any
sql server logins have been created since the backup was performed, the logins
are lost when master is restored. re-create the logins by using sql server
management studio or by using the original scripts with which the logins were
created.
-you can restore the master database only from a backup that is created on an
instance of sql server 2005.
note: any database users that were previously associated with lost logins are
orphaned, that is they cannot access the database. for more information, see
troubleshooting orphaned users.
-after you restore master, the instance of sql server is stopped automatically.
if you have to make additional repairs and want to prevent more than a single
connection to the server, restart the server in single-user mode. otherwise, the
server can be restarted regularly. if you decide to restart the server in
single-user mode, first stop all sql server services, except the server instance
itself, and stop all sql server utilities, such as sql server agent. by
stopping the services and utilities, you prevent them from trying to access the
server instance.
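the restore itself, run from a connection to the single-user-mode instance, can
be sketched as follows; the backup path is an example:

```sql
-- run (e.g. from sqlcmd) while the instance is started with -m
restore database master
from disk = 'c:\sqlbackup\master_full.bak'
with replace

-- the instance shuts down automatically when the restore completes
```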
reconstructing changes that are made after the backup was created
if a user database was created after the restored backup of master, that user
database is inaccessible until one of the following occurs:
�the database is attached. we recommend this method.
attaching a database requires that all of the database files are available and
usable. we recommend specifying the log files, and also the data files,
instead of having the attach operation try to rebuild the log file or files.
for information about how to attach a database, see how to: attach a database
(sql server management studio) or create database (transact-sql).
�the database is restored from one or more backups.
restore the database only if its data files or transaction log files no longer
exist or are unusable.
attaching or restoring a database re-creates the necessary system table
entries, and the database becomes available in the same state as before the
master database was restored.
if any objects, logins, or databases have been deleted after master was backed
up,
you must delete those objects, logins, and databases from master.
important: if any databases no longer exist but are referenced in a backup of
master that is restored, sql server may report errors when it starts, because it
can no longer find those databases. those databases should be dropped after the
backup is restored.
when master has been restored and any changes have been reapplied, back up master
immediately.
what if an ldf file keeps growing continuously; how can we control this?
-the ldf file grows every time transactions are logged, and uncommitted
transactions keep the log from being truncated. so in order to control the
growth of the ldf file we have to commit the transactions and take regular
transaction log backups.
migration:
i have done the migration.
i created automation scripts in order to transfer the logins and users.
i think automation is the best way, as the gui tends to hang when it also has
to maintain the migration on the other side, apart from the problems we get
when we start writing scripts in the gui.
i had some problems while shifting the logins and users.
schema:
beginning in sql server 2005, each object belongs to a database schema. a database
schema is a distinct namespace that is separate from a database user. you can
think of a schema as a container of objects. schemas can be created and altered in
a database, and users can be granted access to a schema. a schema can be owned by
any user, and schema ownership is transferable.
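the schema concept can be sketched as follows (sql 2005 syntax; the schema,
table and user names are hypothetical examples):

```sql
-- create a schema and a table inside it
create schema sales authorization dbo
go
create table sales.orders (order_id int primary key, order_date datetime)
go
-- grant a user access to everything in the schema
grant select on schema::sales to nag_user
go
-- schema ownership is transferable
alter authorization on schema::sales to nag_user
```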