2. What should be analyzed before writing a stored procedure?
How will the SP be executed (from the back end, from a job, from the front end, etc.)? What are the input/output parameters and their data types? What is the result set of the SP? What error checks and validations are needed? Check for the objects required for writing the SP. Check for performance-related inputs. Pseudo code may be written.
3. How are stored procedures different from functions?
Stored procedures differ from functions in that they do not return values in place of their names and they cannot be used directly in an expression. A function can be used in a SELECT statement; a stored procedure cannot. Functions can return tables; stored procedures cannot. Functions cannot change the server environment or OS environment, while a stored procedure can.
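As a sketch of this difference (the function, procedure, and Products table are hypothetical names, not from the original document):

```sql
-- A scalar function can be embedded in a query...
CREATE FUNCTION dbo.fn_NetPrice (@price money, @taxRate decimal(4,2))
RETURNS money
AS
BEGIN
    RETURN @price * (1 + @taxRate / 100)
END
GO

-- ...like this: the function is evaluated per row inside the SELECT.
SELECT ProductID, dbo.fn_NetPrice(UnitPrice, 17.5) AS NetPrice
FROM dbo.Products
GO

CREATE PROCEDURE dbo.usp_GetProducts
AS
    SELECT ProductID, UnitPrice FROM dbo.Products
GO

-- A stored procedure can only be EXECUTEd on its own,
-- never referenced inside a SELECT expression.
EXEC dbo.usp_GetProducts
```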
What constraints does SQL Server support?
Primary Key constraint, Foreign Key constraint, Unique constraint, Null or Not Null constraint, and Check constraint.
4. What is the difference between a primary key and a unique key?
A primary key does not allow NULL values; a unique key allows a single NULL value. When a primary key is defined, a clustered index is created by default; when a unique key is defined, a non-clustered index is created by default. Multiple unique key constraints can be defined on a table, but only one primary key can be defined on a table.
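A minimal sketch of these rules (table and constraint names are hypothetical):

```sql
-- One primary key, several unique constraints on the same table.
CREATE TABLE dbo.Member
(
    MemberID   int          NOT NULL CONSTRAINT PK_Member PRIMARY KEY,  -- clustered index by default
    Email      varchar(100) NULL     CONSTRAINT UQ_Member_Email UNIQUE, -- non-clustered index by default
    NationalID char(9)      NULL     CONSTRAINT UQ_Member_NatID UNIQUE
)

-- A single NULL is accepted by each unique column...
INSERT INTO dbo.Member (MemberID, Email, NationalID) VALUES (1, NULL, NULL)
-- ...but a second NULL in the same unique column would violate the
-- unique constraint, and a NULL MemberID is rejected by the primary key.
```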
5. Why use a clustered index on a table?
A clustered index determines the physical order of data in a table. It is particularly efficient on columns that are often searched for ranges of values, and it is also efficient for finding a specific row when the indexed value is unique.
6. When should you use a clustered index on a table?
Consider using a clustered index for:
- Columns that contain a large number of distinct values.
- Queries that return a range of values using operators such as BETWEEN, >, >=, <, and <=.
- Columns that are accessed sequentially.
- Queries that return large result sets.
- Columns that are frequently accessed by queries involving JOIN or GROUP BY clauses; typically these are foreign key columns. An index on the column(s) specified in the ORDER BY or GROUP BY clause eliminates the need for SQL Server to sort the data, because the rows are already sorted. This improves query performance.
- OLTP-type applications where very fast single-row lookup is required, typically by means of the primary key. Create a clustered index on the primary key.
7. When should you not use a clustered index on a table?
Clustered indexes are not a good choice for:
- Columns that undergo frequent changes. A change results in the entire row moving, because SQL Server must keep the data values of a row in physical order. This is an important consideration in high-volume transaction processing systems where data tends to be volatile.
- Wide keys. The key values from the clustered index are used by all non-clustered indexes as lookup keys and are therefore stored in each non-clustered index leaf entry.
8. Can we use check constraints on all columns?
Constraints define rules regarding the values allowed in columns and are the standard mechanism for enforcing integrity; putting constraints on data increases its correctness and keeps retrieval fast. Using constraints is preferred to using triggers, rules, and defaults, and the query optimizer also uses constraint definitions to build high-performance query execution plans. However, a CHECK constraint can validate a column value only against a logical expression or another column in the same table. If the application requires that a column value be validated against a column in another table, a check constraint may not be useful. Constraints can communicate about errors only through standardized system error messages; if the application requires (or can benefit from) customized messages, constraints may not be useful. So we may not be able to use check constraints on all columns.
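A short sketch of the same-table limitation (table and constraint names are hypothetical):

```sql
-- A CHECK constraint may compare columns of the same row...
CREATE TABLE dbo.Booking
(
    BookingID int      NOT NULL PRIMARY KEY,
    StartDate datetime NOT NULL,
    EndDate   datetime NOT NULL,
    -- Valid: references two columns in the same table.
    CONSTRAINT CK_Booking_Dates CHECK (EndDate >= StartDate)
)
-- ...but it cannot look at rows of another table; cross-table
-- validation needs a FOREIGN KEY constraint or a trigger instead.
```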
9. What are distributed queries?
Distributed queries are queries that access data from multiple heterogeneous data sources, which can be stored on either the same or different computers. Distributed queries provide SQL Server users with access to:
i. Distributed data stored in multiple instances of SQL Server.
ii. Heterogeneous data stored in various relational and non-relational data sources accessed using an OLE DB provider.
10. How do you write a join query which uses one table from DB1 and another from DB2, where both DB1 and DB2 are on the same server?
SELECT A.col1, A.col2, B.col1, B.col2
FROM DB1.dbo.t1 A
INNER JOIN DB2.dbo.t2 B ON (A.col1 = B.col1)
12. I have a text box in the front-end GUI which accepts unlimited characters, and the data entered in that text box goes to the database. How do you handle it?
Use the text or ntext data types of SQL Server to store the information.
13. Where is a stored procedure stored in the database?
The procedure name is stored in the sysobjects system table; the text of the procedure is stored in the syscomments system table.
14. I have CountryId and CountryName columns in a table. For me, United States and United Kingdom are the most important countries, so I want a list of countries with US and UK at the top, followed by all remaining countries in alphabetical order. How can you achieve it?
SELECT CountryID, CountryName
FROM dbo.Country (NOLOCK)
ORDER BY (CASE WHEN (CountryName = 'United States') THEN CHAR(0)
               WHEN (CountryName = 'United Kingdom') THEN CHAR(1)
               ELSE CountryName END)
15. What are triggers?
SQL Server 2000 triggers are a special class of stored procedure defined to execute automatically when an UPDATE, INSERT, or DELETE statement is issued against a table or view. Triggers are powerful tools that can be used to enforce business rules automatically when data is modified, and they can extend the integrity-checking logic of SQL Server constraints, defaults, and rules. Tables can have multiple triggers. Triggers contain Transact-SQL statements, much the same as stored procedures, and, like stored procedures, they return the result set generated by any SELECT statements in the trigger. Including SELECT statements in triggers, except statements that only fill parameters, is not recommended. There are two types of triggers in SQL Server: AFTER triggers and INSTEAD OF triggers.
16. Explain AFTER triggers.
An AFTER trigger executes after the statement that triggered it completes. If the statement fails with an error, such as a constraint violation or syntax error, the trigger is not executed. AFTER triggers cannot be specified for views; they can only be specified for tables. You can specify multiple AFTER triggers for each triggering action (INSERT, UPDATE, or DELETE). If you have multiple AFTER triggers for a table, you can use sp_settriggerorder to define which AFTER trigger fires first and which fires last; all other AFTER triggers fire in an undefined order which you cannot control. AFTER is the default in SQL Server 2000. You could not specify AFTER or INSTEAD OF in SQL Server version 7.0 or earlier; all triggers in those versions operated as AFTER triggers.
17. Explain INSTEAD OF triggers.
An INSTEAD OF trigger executes in place of the triggering statement. INSTEAD OF triggers also let you specify actions that allow views, which would normally not support updates, to be updatable.
18. What are cursors, how do you use them, and when do you use them?
A cursor is a mechanism you can use to fetch rows one at a time. Transact-SQL cursors are used mainly in stored procedures, triggers, and Transact-SQL scripts in which they make the contents of a result set available to other Transact-SQL statements. When writing code for a transaction where the result set includes several rows of data, you may declare and use a cursor. For example, if you write code that includes a SELECT statement or stored procedure that returns multiple rows, you must declare a cursor and associate it with the SELECT statement; then, by using the FETCH statement, you can retrieve one row at a time from the result set.
The typical process for using a Transact-SQL cursor in a stored procedure or trigger is:
- Declare Transact-SQL variables to contain the data returned by the cursor: one variable for each result set column, large enough to hold the values returned by the column and with a data type that can be implicitly converted from the data type of the column.
- Associate the cursor with a SELECT statement using the DECLARE CURSOR statement. DECLARE CURSOR also defines the characteristics of the cursor, such as the cursor name and whether the cursor is read-only or forward-only.
- Use the OPEN statement to execute the SELECT statement and populate the cursor.
- Use the FETCH INTO statement to fetch individual rows and have the data for each column moved into a specified variable. Other Transact-SQL statements can then reference those variables to access the fetched data values. Transact-SQL cursors do not support fetching blocks of rows.
When you are finished with the cursor, use the CLOSE statement. Closing a cursor frees some resources, such as the cursor's result set and its locks on the current row, but the cursor structure is still available for processing if you reissue an OPEN statement. Because the cursor is still present, you cannot reuse the cursor name at this point. The DEALLOCATE statement completely frees all resources allocated to the cursor, including the cursor name; after a cursor is deallocated, you must issue a DECLARE statement to rebuild it.
Disadvantages of cursors: each time a row is fetched from the cursor, it results in a network round trip, whereas a normal SELECT query makes only one round trip, however large the result set is. Cursors are also costly because they require more resources and temporary storage (resulting in more I/O operations). Further, there are restrictions on the SELECT statements that can be used with some types of cursors.
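The cursor life cycle described above can be sketched as follows (the dbo.Employee table and its columns are hypothetical):

```sql
DECLARE @EmpID int, @EmpName varchar(40)       -- one variable per result set column

DECLARE emp_cursor CURSOR FAST_FORWARD FOR     -- associate the cursor with a SELECT
    SELECT EmpID, EmpName FROM dbo.Employee

OPEN emp_cursor                                -- execute the SELECT, populate the cursor
FETCH NEXT FROM emp_cursor INTO @EmpID, @EmpName

WHILE @@FETCH_STATUS = 0                       -- 0 means the last fetch succeeded
BEGIN
    PRINT CAST(@EmpID AS varchar(10)) + ' - ' + @EmpName
    FETCH NEXT FROM emp_cursor INTO @EmpID, @EmpName
END

CLOSE emp_cursor                               -- release the result set and row locks
DEALLOCATE emp_cursor                          -- free the cursor structure and its name
```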
The performance benefits of indexes, however, do come with a cost. Tables with indexes require more storage space in the database. Also, commands that insert, update, or delete data can take longer and require more processing time to maintain the indexes. When you design and create indexes, you should ensure that the performance benefits outweigh the extra cost in storage space and processing resources.
20. What is the difference between a clustered index and a non-clustered index?
A clustered index reorders the way records in the table are physically stored; a table can have only one clustered index, and the leaf nodes of a clustered index contain the data pages. In a non-clustered index, the logical order of the index may not match the physical stored order of the rows on disk; the leaf nodes of a non-clustered index do not consist of the data pages but instead contain index rows. A table can have up to 249 non-clustered indexes.
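A minimal sketch of both index types (table, column, and index names are hypothetical):

```sql
CREATE TABLE dbo.Orders
(
    OrderID    int      NOT NULL,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL
)

-- The clustered index dictates the physical order of the rows; only one is allowed.
CREATE CLUSTERED INDEX CIX_Orders_OrderID ON dbo.Orders (OrderID)

-- Non-clustered indexes keep index rows in their leaf level that point back
-- at the clustered key; many of these can exist per table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
```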
22. Explain isolation levels in transactions.
The isolation level is the degree to which one transaction must be isolated from other transactions; in other words, the level at which a transaction is prepared to accept inconsistent data. A lower isolation level increases concurrency, but at the expense of data correctness; conversely, a higher isolation level ensures that data is correct, but can affect concurrency negatively. The isolation level required by an application determines the locking behavior SQL Server uses.
The appropriate isolation level can be set for an entire SQL Server session with the SET TRANSACTION ISOLATION LEVEL statement:
SET TRANSACTION ISOLATION LEVEL
{ READ COMMITTED | READ UNCOMMITTED | REPEATABLE READ | SERIALIZABLE }
The DBCC USEROPTIONS command can be used to determine the transaction isolation level currently set:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
GO
DBCC USEROPTIONS
GO
READ UNCOMMITTED: When this level is used, SQL Server does not issue shared locks while reading data, so an uncommitted transaction can be read that might get rolled back later. This level is also called dirty read. It is the lowest isolation level and ensures only that physically corrupt data will not be read.
READ COMMITTED: This is the default isolation level in SQL Server. When it is used, SQL Server uses shared locks while reading data. It ensures that physically corrupt data will not be read and that data another application has changed but not yet committed will never be read, but it does not ensure that the data will not be changed before the end of the transaction.
REPEATABLE READ: When this level is used, dirty reads and non-repeatable reads cannot occur. Locks are placed on all data that is used in a query, and another transaction cannot update that data.
SERIALIZABLE: This is the most restrictive isolation level. When it is used, phantom values cannot occur; it prevents other users from updating or inserting rows into the data set until the transaction is complete.
Non-repeatable read: a transaction reads the same row more than once, and between the two (or more) reads a separate transaction modifies that row. Because the row was modified between reads within the same transaction, each read produces different values, which introduces inconsistency.
Phantom read: a transaction attempts to select a row that does not exist, and a second transaction inserts the row before the first transaction finishes. The inserted row appears as a phantom to the first transaction, inconsistently appearing and disappearing.
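A sketch of REPEATABLE READ in action (the EmployeeSalary table follows the naming used elsewhere in this document; the delay merely widens the window for a concurrent session):

```sql
-- REPEATABLE READ holds shared locks until the transaction ends, so the
-- second read below cannot observe an update made by a concurrent session.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
    SELECT Salary FROM dbo.EmployeeSalary WHERE EmployeeNumber = 1  -- locks the row
    WAITFOR DELAY '00:00:05'       -- a concurrent UPDATE of this row now blocks
    SELECT Salary FROM dbo.EmployeeSalary WHERE EmployeeNumber = 1  -- same value as before
COMMIT TRANSACTION
-- Note: REPEATABLE READ still allows phantoms (newly inserted rows);
-- only SERIALIZABLE prevents those as well.
```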
23. Write a query to get the Nth highest salary.
SELECT EmployeeNumber, Salary
FROM EmployeeSalary A
WHERE (N - 1) = (SELECT COUNT(DISTINCT Salary)
                 FROM EmployeeSalary B
                 WHERE B.Salary > A.Salary)
Or
SELECT TOP 1 EmployeeNumber, Salary
FROM (SELECT TOP N EmployeeNumber, Salary
      FROM EmployeeSalary
      ORDER BY Salary DESC) EmpSal
ORDER BY Salary ASC
24. Write a query to delete duplicate records.
If there is no unique key at all and we have to delete duplicates, create a temporary unique column and then delete, as shown below:
CREATE TABLE Employee ( EmpID int )
GO
INSERT INTO Employee(EmpID) SELECT 1
INSERT INTO Employee(EmpID) SELECT 1
INSERT INTO Employee(EmpID) SELECT 1
INSERT INTO Employee(EmpID) SELECT 1
INSERT INTO Employee(EmpID) SELECT 2
INSERT INTO Employee(EmpID) SELECT 2
INSERT INTO Employee(EmpID) SELECT 3
INSERT INTO Employee(EmpID) SELECT 3
INSERT INTO Employee(EmpID) SELECT 3
GO
ALTER TABLE Employee ADD tempRowKey int IDENTITY(1,1)
DELETE A
FROM Employee A
WHERE tempRowKey NOT IN (SELECT MIN(tempRowKey)
                         FROM Employee B
                         WHERE A.EmpID = B.EmpID)
ALTER TABLE Employee DROP COLUMN tempRowKey
SELECT * FROM Employee
Microsoft SQL Server allows statistical information regarding the distribution of values in a column to be created. This statistical information can be used by the query processor to determine the optimal strategy for evaluating a query. When an index is created, SQL Server automatically stores statistical information regarding the distribution of values in the indexed column(s). The query optimizer in SQL Server uses these statistics to estimate the cost of using the index for a query. All indexes have distribution statistics that describe the selectivity and distribution of the key values in the index. Selectivity is a property that relates to how many rows are typically identified by a key value. A unique key has high selectivity; a key value found in 1,000 rows has poor selectivity. The selectivity and distribution statistics are used by SQL Server to optimize its navigation through tables and indexed views when processing Transact-SQL statements.
The distribution statistics are used to estimate how efficient an index would be in retrieving data associated with a key value or range specified in the query.
30. What joins are supported by SQL Server?
Inner join, left outer join, right outer join, full outer join, and cross join (a join with no join condition).
31. What is the difference between a LEFT OUTER JOIN and a RIGHT OUTER JOIN?
The result set of a left outer join includes all the rows from the left table specified in the LEFT OUTER clause, not just the ones in which the joined columns match. When a row in the left table has no matching rows in the right table, the associated result set row contains null values for all select-list columns coming from the right table. A right outer join is the reverse of a left outer join: all rows from the right table are returned, and null values are returned for the left table any time a right-table row has no matching row in the left table.
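A short sketch of the two outer joins (Department and Employee are hypothetical tables):

```sql
-- Every department, with its employees where they exist.
SELECT D.DeptName, E.EmpName
FROM dbo.Department D
LEFT OUTER JOIN dbo.Employee E ON (E.DeptID = D.DeptID)
-- Departments with no employees still appear; E.EmpName is NULL for them.

-- The equivalent right outer join simply swaps the table roles:
SELECT D.DeptName, E.EmpName
FROM dbo.Employee E
RIGHT OUTER JOIN dbo.Department D ON (E.DeptID = D.DeptID)
```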
A lock is an object used to indicate that a user/process has some dependency on a resource. Placing a lock on a resource is called locking. Locking is used to ensure transactional integrity and database consistency. Locking prevents users from reading data being changed by other users, and prevents multiple users from changing the same data at the same time. If locking is not used, data within the database may become logically incorrect, and queries executed against that data may produce unexpected results. SQL Server locks are applied at various levels of granularity in the database. Locks can be acquired on rows, pages, keys, ranges of keys, indexes, tables, or databases. SQL Server dynamically determines the appropriate level at which to place locks for each Transact-SQL statement. The level at which locks are acquired can vary for different objects referenced by the same query.
34. What is a shared lock, and when is it used?
Shared locks allow concurrent transactions to read a resource; no other transaction can modify the data while shared locks exist on the resource. Shared locks on a resource are released as soon as the data has been read, unless the transaction isolation level is set to REPEATABLE READ or higher, or a locking hint is used to retain the shared locks for the duration of the transaction. Shared locks are used for operations that do not change or update data (read-only operations), such as a SELECT statement.
35. What are table hints?
A table hint specifies a table scan, one or more indexes to be used by the query optimizer, or a locking method to be used by the query optimizer with this table and for this SELECT. The following table hints can be specified in the FROM clause: INDEX (index_val [,...n]), FASTFIRSTROW, HOLDLOCK, NOLOCK, PAGLOCK, READCOMMITTED, READPAST, READUNCOMMITTED, REPEATABLEREAD, ROWLOCK, SERIALIZABLE, TABLOCK, TABLOCKX, UPDLOCK, XLOCK.
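As a sketch, table hints go in a WITH clause after the table name (table and index names are hypothetical):

```sql
-- NOLOCK reads without taking shared locks (dirty reads become possible);
-- the INDEX hint forces the optimizer to use a particular index.
SELECT OrderID, CustomerID
FROM dbo.Orders WITH (NOLOCK, INDEX (IX_Orders_CustomerID))
WHERE CustomerID = 42
```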
36. Can you write a SELECT statement without a FROM clause?
Yes: for assigning values to variables, for evaluating expressions, etc. For example, SELECT @Total = 10 * 5 or SELECT GETDATE().
37. What indexes does SQL Server provide?
Clustered and non-clustered.
38. A non-clustered index is based on a clustered index; if I drop the clustered index, will the non-clustered index be re-created or dropped?
The non-clustered index is not dropped. When the clustered index is dropped, SQL Server automatically rebuilds the non-clustered indexes, because their lookup keys change from the clustering key to row identifiers on the resulting heap.
39. How do you execute dynamic SQL?
Example using the sp_executesql procedure:
DECLARE @qry NVARCHAR(1000)
DECLARE @ParmDefinition NVARCHAR(500)
SELECT @qry = N'SELECT * FROM Customers WHERE CustomerID = @CustomerID'
SELECT @ParmDefinition = N'@CustomerID nchar(5)'
EXECUTE sp_executesql @qry, @ParmDefinition, @CustomerID = 'ALFKI'
Example using the EXECUTE statement:
DECLARE @qry NVARCHAR(1000)
SELECT @qry = N'SELECT * FROM Customers WHERE CustomerID = '
SELECT @qry = @qry + '''ALFKI'''
EXECUTE (@qry)
40. What is the difference between EXEC and the sp_executesql stored procedure?
sp_executesql can be used instead of stored procedures to execute a Transact-SQL statement a number of times when the change in parameter values to the statement is the only variation. Because the Transact-SQL statement itself remains constant and only the parameter values change, the SQL Server query optimizer is likely to reuse the execution plan it generates for the first execution.
41. What is the output of the following code?
IF '' = 0 PRINT 'YES' ELSE PRINT 'NO'
The output is 'YES'. This is due to the implicit conversion of character data to integer when comparing with an integer: the blank string is implicitly converted to zero.
42. Suppose a stored procedure has one input parameter of varchar data type with a size of 8000, and the user enters an extreme value beyond 8000 characters. What will happen?
Only the first 8000 characters are taken into the parameter; all the characters after that are ignored. To handle this, the input parameter should be defined with the text data type instead of varchar.
43. What is NOLOCK used for?
NOLOCK does not issue shared locks and does not honor exclusive locks. When this option is in effect, it is possible to read an uncommitted transaction or a set of pages that are rolled back in the middle of a read; dirty reads are possible. It applies only to the SELECT statement.
44. How do you do error handling in a stored procedure?
Using @@ERROR, the error number of the last Transact-SQL statement executed can be found. If the statement executed successfully, the value is set to 0; if any error occurred, @@ERROR returns the number of the error message. Because @@ERROR is cleared and reset on each statement executed, check it immediately following the statement being validated, or save it to a local variable that can be checked later. Example:
CREATE PROCEDURE add_author
    @au_id varchar(11),
    @au_lname varchar(40),
    @au_fname varchar(20),
    @phone char(12),
    @address varchar(40) = NULL,
    @city varchar(20) = NULL,
    @state char(2) = NULL,
    @zip char(5) = NULL,
    @contract bit = NULL
AS
-- Execute the INSERT statement.
INSERT INTO authors (au_id, au_lname, au_fname, phone, address,
                     city, state, zip, contract)
VALUES (@au_id, @au_lname, @au_fname, @phone, @address,
        @city, @state, @zip, @contract)

-- Test the error value.
IF @@ERROR <> 0
BEGIN
    -- Return 99 to the calling program to indicate failure.
    PRINT 'An error occurred loading the new author information'
    RETURN(99)
END
ELSE
BEGIN
    -- Return 0 to the calling program to indicate success.
    PRINT 'The new author information has been loaded'
    RETURN(0)
END
45. Suppose you want to capture the number of records affected by a data manipulation statement, and you also want to trap any error that occurs. How will you do that?
Both the error and the row count must be captured in a single SELECT statement, as shown below; assigning them in two statements would not work, because reading @@ERROR resets it.
DECLARE @error_number int
DECLARE @affected_rows int
UPDATE authors
SET au_lname = 'Meduri'
WHERE au_lname = 'Meduru'
SELECT @error_number = @@ERROR, @affected_rows = @@ROWCOUNT
46. What needs to be done so that nobody is able to drop a table?
DROP TABLE permissions default to the table owner and are not transferable. However, members of the sysadmin fixed server role or the db_owner and db_ddladmin fixed database roles can drop any object by specifying the owner in the DROP TABLE statement. If even these should not be able to drop the table, make the database read-only:
EXEC sp_dboption 'database_name', 'read only', 'TRUE'
and remove all user accounts from the db_ddladmin role. Even this cannot stop members of the sysadmin fixed server role from dropping the table.
47. What is a transaction, and what properties are required for a transaction?
A transaction is a sequence of operations performed as a single logical unit of work. A logical unit of work must exhibit four properties, called the ACID (Atomicity, Consistency, Isolation, and Durability) properties, to qualify as a transaction.
Atomicity: A transaction must be an atomic unit of work; either all of its data modifications are performed or none of them is performed.
Consistency: When completed, a transaction must leave all data in a consistent state. In a relational database, all rules must be applied to the transaction's modifications to maintain all data integrity. All internal data structures, such as B-tree indexes or doubly-linked lists, must be correct at the end of the transaction.
Isolation: Modifications made by concurrent transactions must be isolated from the modifications made by any other concurrent transactions. A transaction either sees data in the state it was in before another concurrent transaction modified it, or it sees the data after the second transaction has completed, but it does not see an intermediate state. This is referred to as serializability, because it results in the ability to reload the starting data and replay a series of transactions to end up with the data in the same state it was in after the original transactions were performed.
Durability: After a transaction has completed, its effects are permanently in place in the system. The modifications persist even in the event of a system failure.
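Atomicity can be sketched as follows (the Account table, its columns, and the key values are hypothetical; either both updates commit or neither does):

```sql
BEGIN TRANSACTION
    UPDATE dbo.Account SET Balance = Balance - 100 WHERE AccountID = 1
    IF @@ERROR <> 0 BEGIN ROLLBACK TRANSACTION RETURN END

    UPDATE dbo.Account SET Balance = Balance + 100 WHERE AccountID = 2
    IF @@ERROR <> 0 BEGIN ROLLBACK TRANSACTION RETURN END
COMMIT TRANSACTION
-- If either update fails, the ROLLBACK undoes the whole unit of work,
-- so the money is never debited without being credited.
```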
The permissions validation stage controls the activities the user is allowed to perform in the SQL Server database.
Linked Servers
A linked server is a virtual server that has been defined to SQL Server with all the information needed to access an OLE DB data source. A linked server name is defined using the sp_addlinkedserver system stored procedure; the linked server definition contains all the information needed to locate the OLE DB data source. Local SQL Server logins are then mapped to logins on the linked server using sp_addlinkedsrvlogin. Remote tables can then be referenced by using the linked server name as the server name in a four-part name used as a table or view reference in a Transact-SQL statement; the other three parts of the name reference an object in the linked server that is exposed as a row set. Remote tables can also be referenced by using the linked server as an input parameter to an OPENQUERY function: OPENQUERY sends the OLE DB provider a command to execute, and the returned row set can then be used as a table or view reference in a Transact-SQL statement.
Ad Hoc Names
An ad hoc name is used for infrequent queries against OLE DB data sources that are not defined as a linked server name. In SQL Server 2000, the OPENROWSET and OPENDATASOURCE functions provide connection information for accessing data from OLE DB data sources.
51. What is the OPENQUERY command, and what parameters do you pass?
OPENQUERY executes the specified pass-through query on the given linked server, which is an OLE DB data source:
OPENQUERY ( linked_server , 'query' )
Example:
EXEC sp_addlinkedserver 'OracleSvr', 'Oracle 7.3', 'MSDAORA', 'ORCLDB'
GO
SELECT * FROM OPENQUERY(OracleSvr, 'SELECT name, id FROM joe.titles')
GO
52. How many types of user-defined functions does SQL Server support?
SQL Server 2000 supports three types of user-defined functions:
o Scalar functions
o Inline table-valued functions
o Multistatement table-valued functions
53. Explain database schema.
A database schema is a collection of metadata that describes the relations in a database. A schema can be simply described as the "layout" of a database, or the blueprint that outlines the way data is organized into tables.
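The three user-defined function types listed above can be sketched as follows (all names and the Orders table are hypothetical):

```sql
-- 1. Scalar function: returns a single value.
CREATE FUNCTION dbo.fn_Square (@n int) RETURNS int
AS BEGIN RETURN @n * @n END
GO

-- 2. Inline table-valued function: a single SELECT, usable like a
--    parameterized view in the FROM clause.
CREATE FUNCTION dbo.fn_OrdersByCustomer (@CustomerID int)
RETURNS TABLE
AS RETURN (SELECT OrderID, OrderDate
           FROM dbo.Orders
           WHERE CustomerID = @CustomerID)
GO

-- 3. Multistatement table-valued function: builds up and returns a
--    declared table variable.
CREATE FUNCTION dbo.fn_RecentOrders ()
RETURNS @result TABLE (OrderID int, OrderDate datetime)
AS
BEGIN
    INSERT INTO @result
    SELECT TOP 10 OrderID, OrderDate
    FROM dbo.Orders
    ORDER BY OrderDate DESC
    RETURN
END
```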
55. What are temporary tables? How many types of temporary tables can be defined in SQL Server?
Temporary tables are similar to permanent tables, except that temporary tables are stored in tempdb and are deleted automatically when no longer in use. There are two types of temporary tables: local and global. Local temporary tables have a single number sign (#) as the first character of their names; they are visible only to the current connection for the user, and they are deleted when the user disconnects from the instance of SQL Server. Global temporary tables have two number signs (##) as the first characters of their names; they are visible to any user after they are created, and they are deleted when all users referencing the table disconnect from SQL Server.
56. Given two tables:
Emp (EmpID, EmpName)
Job (JobID, JobName)
Try any method for getting all employees having more than one job. You can add columns, create another table, or drop columns.
Create a table Emp_Job with the columns EmpID and JobID:
SELECT E.EmpID, E.EmpName
FROM Emp E
INNER JOIN Emp_Job EJ ON (E.EmpID = EJ.EmpID)
GROUP BY E.EmpID, E.EmpName
HAVING COUNT(EJ.JobID) > 1
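The two temporary-table types can be sketched as follows (table names are hypothetical):

```sql
-- Local (#) vs global (##) temporary tables; both live in tempdb.
CREATE TABLE #MySession (ID int)        -- visible to this connection only
CREATE TABLE ##AllSessions (ID int)     -- visible to every connection

INSERT INTO #MySession VALUES (1)
INSERT INTO ##AllSessions VALUES (1)

-- #MySession is dropped automatically when this connection closes;
-- ##AllSessions is dropped when the last connection referencing it closes.
```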
A filegroup is a named collection of one or more files that forms a single unit of allocation or administration of a database.
SELECT A.MName
FROM Movie A
INNER JOIN Movie B ON (A.MName = B.MName
                       AND A.RolePlayed = B.RolePlayed
                       AND A.ActorName != B.ActorName)
65. Write a query to correct the salary entries in table A from table B. You cannot delete table A, as it may contain more or fewer rows than B.
Table A (EmpID, EmpName, Salary) - contains the wrong salary entries
Table B (EmpID, EmpName, Salary) - contains the right salary entries
UPDATE A
SET A.Salary = B.Salary
FROM TABLE_A A INNER JOIN TABLE_B B ON (A.EmpID = B.EmpID)
66. What is normalization?
Normalization can be termed the process of reorganizing data so that repeating groups are eliminated and redundant data is eliminated or minimized. Normalization usually involves dividing a database into two or more tables and defining relationships between the tables. The objective is to isolate data so that additions, deletions, and modifications of a field can be made in just one table and then propagated through the rest of the database via the defined relationships.
67. Explain First Normal Form.
A relation is in 1NF if and only if all underlying domains contain atomic values only.
The following is not in 1NF, because the Marks column holds non-atomic values:
Student
StudentNo  StudentName  Marks
1          Xxx          10, 20
2          Yyy          10
Nor is this, because the marks columns form a repeating group:
Student_Marks
StudentNo  Sub1Marks  Sub2Marks
1          10         20
2          10         null
68. Explain Second Normal Form.
A relation is in 2NF if it is in 1NF and every non-key attribute is fully dependent on each candidate key of the relation.
The following is not in 2NF, because StudentName depends on only part of the key (StudentNo, SubjectCode):
Student_Marks
StudentNo  StudentName  SubjectCode  Marks
1          Xxx          S1           10
1          Xxx          S2           10
2          Yyy          S1           20
The following is in 2NF:
Student
StudentNo  StudentName
1          Xxx
2          Yyy
Subject
SubjectCode
S1
S2
S3
Student_Marks
StudentNo  SubjectCode  Marks
1          S1           10
1          S2           10
2          S1           20
69. Explain Third Normal Form.
A relation is in 3NF if it is in 2NF and no non-key attribute is transitively dependent on a candidate key.
The following is not in 3NF, because SchoolName depends on SchoolCode rather than directly on the key StudentNo:
Student
StudentNo  StudentName  SchoolCode  SchoolName
1          Xxx          S1          Aaa
2          Yyy          S2          Bbb
The following is in 3NF:
Student
StudentNo  StudentName  SchoolCode
1          Xxx          S1
2          Yyy          S2
School
SchoolCode  SchoolName
S1          Aaa
S2          Bbb
S3          Ccc
70. What is de-normalization, and when would you go for it?
De-normalization is the reverse process of normalization: the controlled introduction of redundancy into the database design. It helps improve query performance, as the number of joins can be reduced.
71. How do you implement one-to-one, one-to-many and many-to-many relationships while designing tables?
A one-to-one relationship can be implemented as a single table, and rarely as two tables with primary and foreign key relationships. One-to-many relationships are implemented by splitting the data into two tables with a primary key and foreign key relationship. Many-to-many relationships are implemented using a junction table, with the keys from both tables forming the composite primary key of the junction table.
72. Define candidate key, alternate key, and composite key.
A candidate key is one that can identify each row of a table uniquely. Generally a candidate key becomes the primary key of the table. If the table has more than one candidate key, one of them becomes the primary key and the rest are called alternate keys. A key formed by combining at least two columns is called a composite key.
73. What is the maximum size of a row in a table?
8060 bytes.
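A junction-table sketch of the many-to-many case, reusing the Emp and Job tables from question 56 (constraint names are hypothetical):

```sql
CREATE TABLE dbo.Emp (EmpID int NOT NULL PRIMARY KEY, EmpName varchar(40))
CREATE TABLE dbo.Job (JobID int NOT NULL PRIMARY KEY, JobName varchar(40))

-- The junction table carries a foreign key to each side, and the two
-- keys together form its composite primary key.
CREATE TABLE dbo.Emp_Job
(
    EmpID int NOT NULL REFERENCES dbo.Emp (EmpID),
    JobID int NOT NULL REFERENCES dbo.Job (JobID),
    CONSTRAINT PK_Emp_Job PRIMARY KEY (EmpID, JobID)
)
```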
74. What type of data can be stored in the bit data type?
The bit data type is used to store Boolean information like 1 or 0 (true or false). Until SQL Server 6.5 the bit data type could hold either a 1 or a 0, with no support for NULL; from SQL Server 7.0 onwards, the bit data type can represent a third state, NULL.
75. What are defaults? Is there a column to which a default can't be bound?
Defaults specify what values are used in a column if you do not specify a value for the column when inserting a row: a data value, option setting, collation, or name assigned automatically by the system if a user does not specify it. One way to apply a default is to create a default definition using the DEFAULT keyword in CREATE TABLE to assign a constant expression as a default on a column; this is the preferred, standard method, and also the more concise way to specify a default. The other way is to create a default object using the CREATE DEFAULT statement and bind it to columns using the sp_bindefault system stored procedure. DEFAULT definitions can be applied to any columns except those defined as timestamp or those with the IDENTITY property.
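Both ways of supplying a default can be sketched as follows (the tables, columns, and default object are hypothetical names):

```sql
-- Preferred: a DEFAULT definition inside CREATE TABLE.
CREATE TABLE dbo.Invoice
(
    InvoiceID int      NOT NULL PRIMARY KEY,
    CreatedOn datetime NOT NULL DEFAULT (GETDATE()),
    Status    char(1)  NOT NULL DEFAULT ('N')
)
GO

-- Older style: a default object bound to a column with sp_bindefault
-- (here to a hypothetical Customer.Country column with no default of its own).
CREATE DEFAULT dbo.df_Country AS 'US'
GO
EXEC sp_bindefault 'dbo.df_Country', 'Customer.Country'
```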
76. What is lock escalation?
Lock escalation is the process of converting many fine-grain, low-level locks (such as row locks and page locks) into fewer coarse-grain, higher-level locks (such as table locks), reducing system overhead.
For example, when a transaction requests rows from a table, SQL Server automatically acquires locks on those rows affected and places higher-level intent locks on the pages and table, or index, which contain those rows. When the number of locks held by the transaction exceeds its threshold, SQL Server attempts to change the intent lock on the table to a stronger lock (for example, intent exclusive (IX) would change to an exclusive (X) lock). After acquiring the stronger lock, all page and row level locks held by the transaction on the table are released, reducing lock overhead. SQL Server may choose to do both row and page locking for the same query, for example, placing page locks on the index (if enough contiguous keys in a nonclustered index node are selected to satisfy the query) and row locks on the data. This reduces the likelihood that lock escalation will be necessary. Lock escalation thresholds are determined dynamically by SQL Server and do not require configuration.
77. What is an extended stored procedure? Extended stored procedures are dynamic-link libraries (DLLs) that SQL Server can dynamically load and execute. Extended stored procedures allow you to create your own external routines in a programming language such as C. The extended stored procedures appear to users as normal stored procedures and are executed in the same way. Parameters can be passed to extended stored procedures, and they can return results and return status. Extended stored procedures run directly in the address space of SQL Server and are programmed using the SQL Server Open Data Services API. After an extended stored procedure has been written, members of the sysadmin fixed server role can register the extended stored procedure with SQL Server and then grant permission to other users to execute the procedure. Extended stored procedures can be added only to the master database. Example to add extended stored procedure
USE master
EXEC sp_addextendedproc xp_hello, 'xp_hello.dll'
78. Can you instantiate a COM object by using T-SQL?
Yes, COM object can be instantiated from T-SQL by using sp_OACreate stored procedure.
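A hedged sketch of the sp_OA* procedures (the ProgID used here is an illustrative example; availability depends on the COM components installed on the server):

```sql
DECLARE @hr int, @object int

-- Create the COM object; a return value of 0 means success.
EXEC @hr = sp_OACreate 'Scripting.FileSystemObject', @object OUT
IF @hr <> 0
    PRINT 'sp_OACreate failed'
ELSE
BEGIN
    -- Methods and properties would be accessed here with
    -- sp_OAMethod / sp_OAGetProperty / sp_OASetProperty.
    EXEC sp_OADestroy @object   -- always release the COM object
END
```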