Monday, September 12, 2011

Understanding deadlocks

              A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource that the other tasks are trying to lock. The following graph presents a high-level view of a deadlock state where:
  • Task T1 has a lock on resource R1 (indicated by the arrow from R1 to T1) and has requested a lock on resource R2 (indicated by the arrow from T1 to R2).
  • Task T2 has a lock on resource R2 (indicated by the arrow from R2 to T2) and has requested a lock on resource R1 (indicated by the arrow from T2 to R1).
  • Because neither task can continue until a resource is available and neither resource can be released until a task continues, a deadlock state exists.
[Diagram showing tasks in a deadlock state]
The SQL Server Database Engine automatically detects deadlock cycles within SQL Server. The Database Engine chooses one of the sessions as a deadlock victim and the current transaction is terminated with an error to break the deadlock.

Best practices to reduce deadlock situations (a retry sketch follows the list):
1. Keep transactions short.
2. Keep transactions in a single batch.
3. Access the least amount of data possible in the transaction.
4. Where appropriate, use a lower isolation level.
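As a hedged sketch of points 1 and 4, and of handling the deadlock-victim error mentioned above, the hypothetical batch below keeps the transaction short and, if the session is chosen as the deadlock victim (error 1205), rolls back and retries. The table and column names (dbo.Accounts, Balance, AccountId) are placeholders, not from any real schema.

-- Sketch only: retry a short transaction when this session is chosen as the deadlock victim.
DECLARE @retry INT = 3;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- use a lower isolation level where appropriate

        BEGIN TRANSACTION;
            UPDATE dbo.Accounts                           -- illustrative table
            SET Balance = Balance - 100
            WHERE AccountId = 1;
        COMMIT TRANSACTION;

        SET @retry = 0;                                   -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;          -- the victim's transaction must be rolled back

        IF ERROR_NUMBER() = 1205                          -- 1205 = chosen as deadlock victim
            SET @retry = @retry - 1;                      -- run the short transaction again
        ELSE
        BEGIN
            DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);                       -- re-raise anything that is not a deadlock
            SET @retry = 0;
        END
    END CATCH
END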






Saturday, September 10, 2011

Database Engine Tuning Advisor (DTA) - Basics



          The Database Engine Tuning Advisor (DTA) is the greatly enhanced replacement for the Index Tuning Wizard tool that shipped with previous versions of SQL Server. DTA plays an important role in an overall performance solution, letting you leverage the query optimizer to receive recommendations on indexes, indexed views, or partitions that could improve performance.

Open a new session (Ctrl+N).
Then you will get a new pane with two tabs:
1. General 2. Tuning Options

On the General tab you need to enter values for a few things, such as the session name, the workload (file or table), the database name, and the tables to tune. You can also save the tuning log to a particular file.
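As a hypothetical example, a file workload can be nothing more than a .sql script containing queries that are representative of the load you want tuned. The tables and filter below are invented for illustration:

-- Workload.sql (illustrative only): representative queries for the DTA session.
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '20110101';

SELECT ProductId, SUM(Quantity) AS TotalQuantity
FROM dbo.OrderDetails
GROUP BY ProductId;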

On the Tuning Options tab, you will get several options such as Limit tuning time (stop at), Physical design structures (PDS) to use in database, Partitioning strategy to employ, and Physical design structures (PDS) to keep in database.

PDS to use in database – Indexed views, Indexes, Nonclustered indexes, and Indexes and indexed views. (Objects to be used in the tuning strategy)

Partitioning strategy to employ – No partitioning, Full partitioning, and Aligned partitioning

PDS to keep in database – Do not keep any existing PDS, Keep all existing PDS, Keep aligned partitioning, Keep indexes only, and Keep clustered indexes only. (Objects to be kept intact)

Once you start the analysis, you will see another three tabs in the pane.

Progress

Recommendations – Estimated improvement (initially 0%), Partition recommendations, and Index recommendations (database name, object name, recommendation (create/alter/drop), target of recommendation, details, size, and definition (the query to make the changes))
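The definition column typically scripts out as an ordinary DDL statement that you can review and apply yourself. A hedged example of the shape such a recommendation might take (the index, table, and column names are made up):

-- Illustrative shape of a DTA index recommendation; names are not from a real tuning session.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerId, OrderId);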

Reports – Tuning summary (date, time, server, database(s), workload file, max tuning time, time taken, expected percentage of improvement, max space, used space, number of events tuned), Tuning reports

SQL Server Profiler


          Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the SQL Server Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze later. For example, you can monitor a production environment to see which stored procedures are affecting performance by executing too slowly.
                   Profiler allows you to monitor and record what is happening inside the database engine. This is accomplished by performing a trace, which is a record of data that has been captured about events. Traces are stored in a table, a trace log file, or both, and they can be either shared or private.
                             The actions you will be monitoring, called events, are anything that happens to the database engine, such as a failed login or a completed query. These events are logically grouped into event classes. The following event categories are available:

Cursors, Database, Errors and Warnings, Locks, Objects, Performance, Scans, Security Audit, Server, Sessions, Transactions, TSQL, Stored Procedures, OLEDB, Broker, Full Text, Deprecation, User Configurable, and Progress Report.
Using SQL Server Profiler, you can do the following:
  • Create a trace that is based on a reusable template
  • Watch the trace results as the trace runs
  • Store the trace results in a table
  • Start, stop, pause, and modify the trace results as necessary
  • Replay the trace results
          Use SQL Server Profiler to monitor only the events in which you are interested. If traces are becoming too large, you can filter them based on the information you want, so that only a subset of the event data is collected. Monitoring too many events adds overhead to the server and the monitoring process, and can cause the trace file or trace table to grow very large, especially when the monitoring process takes place over a long period of time.
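For example, once a trace has been saved to a file, it can be loaded into a table with fn_trace_gettable and filtered with ordinary T-SQL. The file path, target table, and duration threshold below are assumptions for illustration:

-- Load a saved trace file into a table and keep only the slow statements.
SELECT TextData, Duration, SPID, DatabaseName
INTO dbo.SlowTraceEvents
FROM fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
WHERE Duration > 1000000;   -- Duration is reported in microseconds when read from a trace file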
Open a new trace (Ctrl+N).
Then it will ask for the server type, server name, and authentication details. After that you will get the Trace Properties window, which has two tabs.
1. General 2. Events Selection
On the General tab you need to give inputs to different options such as the trace name, trace provider name, trace provider type, version, template, save to file (with a maximum size), save to table, and enable trace stop time.
On the Events Selection tab you need to select events and columns. For the Tuning template, you will get only stored procedure and T-SQL events, with a few columns such as TextData, Duration, SPID, DatabaseID, DatabaseName, ObjectType, and LoginName.
Profiler trace templates – Standard (default), Blank, SP_Counts, TSQL, TSQL_Duration, TSQL_Grouped, TSQL_Replay, TSQL_SPs, and Tuning.
          SQL Server provides a special type of trace called a replay trace, which enables you to capture a workload that you can then replay on a test system. Profiler contains a multithreaded subsystem to handle the replay of a trace, and there are specific requirements that must be met for the replay to succeed. You must use SQL Server authentication only and capture the specific events defined in the Replay template. The replay cannot handle Windows authentication because it does not have the ability to impersonate a Windows user.
         
          When Profiler replays a trace, it spawns multiple execution threads that use the same security context as the original execution. Threads are also synchronized to provide a realistic duplication of the workload. The start time, end time, and system process ID (SPID) columns enable Profiler to re-create the exact interleaving of queries that occurred on the original system.

Columns and events required for a replay trace:
RPC:Starting, Audit Logout, Prepare SQL, Exec Prepared SQL, RPC Output Parameter, SQL:BatchStarting, ExistingConnection, CursorOpen, Audit Login, CursorPrepare, and CursorExecute.

Event
          An event is an action generated within an instance of SQL Server Database Engine. Examples of these are:

  • Login connections, failures, and disconnections.
  • Transact-SQL SELECT, INSERT, UPDATE, and DELETE statements.
  • Remote procedure call (RPC) batch status.
  • The start or end of a stored procedure.
  • The start or end of statements within stored procedures.
  • The start or end of an SQL batch.
  • An error written to the SQL Server error log.
  • A lock acquired or released on a database object.
  • An opened cursor.
  • Security permission checks.


All of the data generated by an event is displayed in the trace in a single row. This row is intersected by data columns that describe the event in detail.

Event Class
An event class is a type of event that can be traced. The event class contains all of the data that can be reported by an event. Examples of event classes are the following:

  • SQL:BatchCompleted
  • Audit Login
  • Audit Logout
  • Lock:Acquired
  • Lock:Released


Template
          A template defines the default configuration for a trace. Specifically, it includes the event classes you want to monitor with SQL Server Profiler. For example, you can create a template that specifies the events, data columns, and filters to use. A template is not executed, but rather is saved as a file with a .tdf extension. Once saved, the template controls the trace data that is captured when a trace based on the template is launched.

Trace
          A trace captures data based on selected event classes, data columns, and filters. For example, you can create a trace to monitor exception errors. To do this, you select the Exception event class and the Error, State, and Severity data columns. Data from these three columns needs to be collected in order for the trace results to provide meaningful data. You can then run a trace, configured in such a manner, and collect data on any Exception events that occur in the server. Trace data can be saved, or used immediately for analysis. Traces can be replayed at a later date, although certain events, such as Exception events, are never replayed. You can also save the trace as a template to build similar traces in the future.

Table Variables


The syntax for creating table variables is quite similar to creating either regular or temporary tables.  The only differences involve a naming convention unique to variables in general, and the need to declare the table variable as you would any other local variable in Transact SQL:
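A minimal sketch of such a declaration, with made-up column names, might look like this:

-- Hypothetical table variable declaration and basic use.
DECLARE @MyTableVariable TABLE
(
    EmployeeId INT          NOT NULL PRIMARY KEY,
    FirstName  NVARCHAR(50) NOT NULL,
    HireDate   DATETIME     NULL
);

INSERT INTO @MyTableVariable (EmployeeId, FirstName, HireDate)
VALUES (1, N'Alice', GETDATE());

SELECT EmployeeId, FirstName, HireDate
FROM @MyTableVariable;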
As you can see the syntax bridges local variable declaration (DECLARE @variable_name variable_data_type) and table creation (column_name, data_type, nullability).  As with any other local variable in T-SQL, the table variable must be prefixed with an "@" sign.  Unlike temporary or regular table objects, table variables have certain clear limitations.
    * Table variables cannot have nonclustered indexes
    * Statistics cannot be created against table variables
Similarities with temporary tables include:
    * Instantiated in tempdb
    * Just as with temp and regular tables, users can perform all Data Manipulation Language (DML) queries against a table variable: SELECT, INSERT, UPDATE, and DELETE.

An interesting limitation of table variables comes into play when executing code that involves a table variable. A table variable's lifespan is only for the duration of the batch, stored procedure, or function that it runs in.
     
An unofficial rule of thumb is to use table variables for returning results from user-defined functions that return table values, and to use temporary tables for storage and manipulation of temporary data, particularly when dealing with large amounts of data. However, when smaller row counts are involved, and when indexing is not a factor, both table variables and temporary tables perform comparably. It then comes down to the preference of the individual responsible for the coding process.
Table variables use fewer resources than temporary tables because of their limited scope. Transactions touching table variables last only for the duration of the update on the table variable, so there is less locking and logging overhead. This also gives the table variable better performance than a temporary table.
We can also declare constraints, a primary key, identity columns, and default values in the table variable very easily. SQL Server does not maintain any statistics on the table variable [remember that statistics are heavily used by the query optimizer to determine the best way to execute a query].
Also, we cannot change the definition of the table variable after it is declared, which means that we cannot use the ALTER TABLE statement on a table variable. If we are using a table variable in a join, we need to alias the table in order to execute the query, as shown in the sketch below.
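A hedged illustration of that aliasing requirement, with tables and columns invented for the example:

-- A table variable must be aliased when it is used in a join.
DECLARE @RecentOrders TABLE (OrderId INT NOT NULL, CustomerId INT NOT NULL);

INSERT INTO @RecentOrders (OrderId, CustomerId) VALUES (10, 1);

SELECT c.CustomerName, ro.OrderId
FROM dbo.Customers AS c
JOIN @RecentOrders AS ro            -- the alias "ro" is what lets the columns be referenced
    ON ro.CustomerId = c.CustomerId;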