
Monday 30 September 2013

Immediate Deadlock notifications in SQL Server


Deadlocks can be a pain to debug since they're rare and unpredictable. The problem lies in reproducing them in your dev environment, which is why it's crucial to gather as much information about them from the production environment as possible.
There are two ways to monitor deadlocks, which I'll cover in future posts: SQL Server tracing and error log checking. Unfortunately both suffer from the same drawback: you don't find out immediately when a deadlock occurs. Getting this information as soon as possible is sometimes crucial in production environments. Sure, you can always turn trace flag 1222 on, but that still doesn't solve the immediate-notification problem.
One problem for some might be that this method is only truly useful if you limit data access to stored procedures. <joke> So all you ORM lovers can stop reading since this doesn't apply to you anymore! </joke>
The other problem is that it requires rewriting the problematic stored procedures to support it. However, ever since SQL Server 2005 came out, my opinion has been that every stored procedure should have a try ... catch block implemented. There's no visible performance hit from this, and the benefits can be huge. One of those benefits is instant deadlock notification.

Needed "infrastructure"
So let's see how it's done. This must be implemented in the database you wish to monitor, of course.
First we need a view that will get lock info about the deadlock that just happened. You can read why this type of query gives the info we need in my previous post.
CREATE VIEW vLocks
AS
SELECT  L.request_session_id AS SPID,
        DB_NAME(L.resource_database_id) AS DatabaseName,
        O.Name AS LockedObjectName,
        P.object_id AS LockedObjectId,
        L.resource_type AS LockedResource,
        L.request_mode AS LockType,
        ST.text AS SqlStatementText,       
        ES.login_name AS LoginName,
        ES.host_name AS HostName,
        TST.is_user_transaction AS IsUserTransaction,
        AT.name AS TransactionName   
FROM    sys.dm_tran_locks L
        LEFT JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
        LEFT JOIN sys.objects O ON O.object_id = P.object_id
        LEFT JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
        LEFT JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
        LEFT JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
        LEFT JOIN sys.dm_exec_requests ER ON AT.transaction_id = ER.transaction_id
        CROSS APPLY sys.dm_exec_sql_text(ER.sql_handle) AS ST
WHERE   resource_database_id = db_id()
GO
Next we have to create our stored procedure template:
CREATE PROC <ProcedureName>
AS
  BEGIN TRAN
    BEGIN TRY

      <SPROC TEXT GOES HERE>

    COMMIT
  END TRY
  BEGIN CATCH
    -- check transaction state
    IF XACT_STATE() = -1
    BEGIN
      DECLARE @message xml
      -- get our deadlock info FROM the VIEW
      SET @message = '<TransactionLocks>' + (SELECT * FROM vLocks ORDER BY SPID FOR XML PATH('TransactionLock')) + '</TransactionLocks>'

      -- issue ROLLBACK so we don't ROLLBACK mail sending
      ROLLBACK

      -- get our error message and number
      DECLARE @ErrorNumber INT, @ErrorMessage NVARCHAR(2048)
      SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorMessage = ERROR_MESSAGE()

      -- if it's deadlock error send mail notification
      IF @ErrorNumber = 1205
      BEGIN
        DECLARE @MailBody NVARCHAR(max)
        -- create our mail body in XML format. You can change this to your liking.
        SELECT  @MailBody = '<DeadlockNotification>'
                            + 
                            (SELECT 'Error number: ' + isnull(CAST(@ErrorNumber AS VARCHAR(5)), '-1') + CHAR(10) +
                                    'Error message: ' + isnull(@ErrorMessage, ' NO error message') + CHAR(10)
                             FOR XML PATH('ErrorMessage'))
                            +
                            CAST(ISNULL(@message, '') AS NVARCHAR(MAX))
                            +
                            '</DeadlockNotification>'
        -- for testing purposes
        -- SELECT CAST(@MailBody AS XML)

        -- send an email with the defined email profile.
        -- since this is async it doesn't halt execution
        EXEC msdb.dbo.sp_send_dbmail
                       @profile_name = 'your mail profile',
                       @recipients = 'dba@yourCompany.com',
                       @subject = 'Deadlock occurred notification',
                       @body = @MailBody;
      END
    END
  END CATCH
GO
The main part of this stored procedure is of course the CATCH block. The first thing it does is check the XACT_STATE() value. This is a scalar function that reports the user transaction state: -1 means the transaction is uncommittable and has to be rolled back, which is the state of the victim transaction in SQL Server's internal deadlock-resolution process. Next we read from our vLocks view to get the full info (SPIDs, both SQL statement texts, values, etc.) about the two sessions that created the deadlock. This is possible because our victim transaction hasn't been rolled back yet, so its locks are still present. We save this data into an XML message. Next we roll back our transaction to release the locks. Then, using the error message and its corresponding number, we check whether the error is 1205 (deadlock), and if it is we send our message in an email. How to configure database mail can be seen here.
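To see the XACT_STATE() values for yourself, here is a minimal standalone sketch (not part of the template above) that dooms a transaction with SET XACT_ABORT ON and inspects the state in the CATCH block:

```sql
-- Standalone demo of XACT_STATE(): 1 = open committable transaction,
-- -1 = open but uncommittable (doomed), 0 = no open transaction.
-- XACT_ABORT makes the conversion error doom the transaction,
-- mimicking the state of a deadlock victim.
SET XACT_ABORT ON
BEGIN TRAN
BEGIN TRY
  SELECT CONVERT(INT, 'not a number')  -- raises a conversion error
  COMMIT
END TRY
BEGIN CATCH
  SELECT XACT_STATE() AS TranState     -- -1: only ROLLBACK is allowed
  IF XACT_STATE() <> 0
    ROLLBACK
END CATCH
SET XACT_ABORT OFF
```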
Both the view and the stored procedures template can and probably should be customized to suit your needs.

Testing the theory
Let's try it out and see how it works with a textbook deadlock example that you can find in every book or tutorial.
-- create our deadlock table with 2 simple rows
CREATE TABLE DeadlockTest ( id INT)
INSERT INTO DeadlockTest
SELECT 1 UNION ALL
SELECT 2
GO
Next create two stored procedures (spProc1 and spProc2) with our template:
For spProc1 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 12
WHERE id = 2
  
-- wait 5 secs to set up the deadlock condition in the other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 11
WHERE id = 1

For spProc2 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 11
WHERE id = 1

-- wait 5 secs to set up the deadlock condition in the other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 12
WHERE id = 2

Next open 2 query windows in SSMS:
In window 1 put this script:
exec spProc1
In window 2 put this script:
exec spProc2

Run the script in the first window and after a second or two run the script in the second window. A deadlock will happen, and a few moments after the victim transaction fails you should get the notification mail. The mail profile has to be properly configured, of course.
The resulting email should contain XML with full info about the deadlock. You can preview it by commenting out the msdb.dbo.sp_send_dbmail execution and uncommenting the SELECT CAST(@MailBody AS XML) line.

Monday 16 September 2013

SQL Server script to rebuild all indexes for all tables and all databases

One of the main functions of a DBA is to maintain database indexes. There have been several tips written about the different commands to use for index rebuilds and index defrags, the differences between the two, and about using maintenance plans to maintain indexes on all databases. One of the issues with maintenance plans is that they don't always seem to be as reliable as you would hope, and you also sometimes get false feedback on whether the task was actually successful or not. In this tip we look at a simple script that could be used to rebuild all indexes for all databases.
The one nice thing about maintenance plans is that they work across multiple databases, so you can push out one task to handle the same activity across all of your databases. The problem I have seen with maintenance plans is that sometimes they do not work as expected, so here is another approach.
The script below allows you to rebuild indexes for all databases and all tables within a database.  This could be further tweaked to handle only indexes that need maintenance as well as doing either index defrags or index rebuilds.
The script uses two cursors: one for the databases and another for the tables within each database. In addition, it uses the INFORMATION_SCHEMA.TABLES view to list all of the tables within a database.
Because we need to change from database to database we also need to create dynamic SQL code for the queries.  For the DBCC DBREINDEX option we can just pass in the parameters, but for the ALTER INDEX statement we need to build the query dynamically.  Here is the script.

DECLARE @Database VARCHAR(255)
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT

SET @fillfactor = 90

DECLARE DatabaseCursor CURSOR FOR
SELECT name FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','msdb','tempdb','model','distribution')
ORDER BY 1

OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
   -- build a cursor over all base tables in the current database
   SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT ''['' + table_catalog + ''].['' + table_schema + ''].['' +
   table_name + '']'' as tableName FROM [' + @Database + '].INFORMATION_SCHEMA.TABLES
   WHERE table_type = ''BASE TABLE'''
   EXEC (@cmd)

   OPEN TableCursor
   FETCH NEXT FROM TableCursor INTO @Table
   WHILE @@FETCH_STATUS = 0
   BEGIN
      IF (@@MICROSOFTVERSION / POWER(2, 24) >= 9)
      BEGIN
         -- SQL 2005 or higher command
         SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
         EXEC (@cmd)
      END
      ELSE
      BEGIN
         -- SQL 2000 command
         DBCC DBREINDEX(@Table,' ',@fillfactor)
      END
      FETCH NEXT FROM TableCursor INTO @Table
   END
   CLOSE TableCursor
   DEALLOCATE TableCursor

   FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor



The script will work for both SQL 2000 and higher versions.  For SQL 2000 it uses DBREINDEX and for SQL Server 2005 and higher it uses ALTER INDEX.  Thanks go out to LittlePanthers for providing the code snippet to check the version of SQL Server.

Also, I have excluded the system databases; you can include these or add other databases to exclude from your index maintenance routines.
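To take the tip's suggestion of handling only indexes that actually need maintenance, one starting point (SQL Server 2005 and higher only) is sys.dm_db_index_physical_stats. This is a sketch of the idea, not part of the original script; the 30% threshold is the commonly cited cut-off above which a rebuild is preferred over a reorganize:

```sql
-- List indexes in the current database that are more than 30% fragmented,
-- i.e. candidates for ALTER INDEX ... REBUILD rather than a blanket rebuild
SELECT  OBJECT_NAME(ips.object_id) AS TableName,
        i.name AS IndexName,
        ips.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
        JOIN sys.indexes i ON i.object_id = ips.object_id
                          AND i.index_id = ips.index_id
WHERE   ips.avg_fragmentation_in_percent > 30
        AND ips.index_id > 0  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC
```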
 
source http://www.mssqltips.com/sqlservertip/1367/sql-server-script-to-rebuild-all-indexes-for-all-tables-and-all-databases







Using the SQL Server APP_NAME function to control stored procedure execution

Logic reusability is one of the most practiced aspects of database development. For example, query / business logic developed in stored procedures for one application (say a .NET page) can easily be reused by another application (an SSIS ETL package or an SSRS report, for example). Many times this is not the intention. Provided that an application can make a connection and has the required privileges to use the database objects, you still need a level of control over each database object to check which application is calling it and run the logic based on that specific need.
In this tip we will look at one way to achieve control of stored procedures to ensure that reuse is for the intended purpose and changes do not break other applications that may be using this same code. Typically in a solution development life-cycle, an application starts with front-end development with a back-end database. A database would contain database objects to host as well as query data.
Generally a standard practice is that an application ID is created at a solution level. This ID is a Windows ID and meant to be used by all the front-end components of a solution to connect to the database using Windows integrated security and fetch the necessary data. Users would connect to the application using their own credentials and to facilitate data based on the role of the user, the application would connect to the database using the application ID.
For example, if the solution has components like a front-end, web services, ETL packages, reports, etc., then all of them would connect to the database using the same application ID. Now consider the scenario where a stored procedure was created to be used only by the web services. Other teams can see this SP and may intend to use it for their own components. So how do we make sure that, even if a database user has privileges on the stored procedure, it executes only for the application it's targeted at?
Using Application Name
One of the easiest solutions in this case is by setting the "Application Name" property in the connection string and verifying this name in the SP using the "App_Name" SQL Server system function. To test this scenario, follow the steps below.
Step 1
Open SSMS and create a stored procedure in the database of your choice as shown in the below screenshot. In my case I have created this stored procedure in the AdventureWorks database. This procedure will check the application name returned by the connection and return the name of the application.
Execute this stored procedure from SSMS and check the result:
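The screenshot hasn't survived here, so below is a reconstruction of what such a procedure could look like; the procedure name is my own choice, not from the original tip:

```sql
-- APP_NAME() returns whatever the client put in the "Application Name"
-- part of its connection string (or a client-library default)
CREATE PROCEDURE dbo.uspWhichApplication
AS
BEGIN
  SELECT 'Called by application: ' + APP_NAME() AS CallingApplication
END
GO

EXEC dbo.uspWhichApplication
-- From an SSMS query window APP_NAME() typically reports
-- "Microsoft SQL Server Management Studio - Query"
```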
Step 2
Open SQL Server Data Tools and create a new report project. Add a new report to the project and create a new connection.
If you browse to the Advanced settings of the connection string dialog box you will find a connection string parameter named "Application Name". Set the value of this property to "App SSRS Reports" and click OK.
By setting this property value, a parameter named "Application Name" will be added to the connection string as shown in the below screenshot.
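The resulting connection string then looks roughly like this (server and database names are illustrative):

```
Data Source=myServer;Initial Catalog=AdventureWorks;Integrated Security=True;Application Name=App SSRS Reports
```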
Step 3
Create a dataset using this connection and use this on the report. Execute the report and you should get something similar to the below screenshot.
Using the APP_NAME function it is possible to check inside an SP whether the call is made by the intended application and, depending on the application name, decide whether to execute the logic for that request. Your stored procedure can easily be modified to check the application name: if the caller is the intended application, the rest of the code executes; if not, the procedure skips the logic and simply returns.
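As a sketch of that pattern (the procedure name, application name, and query are examples, not from the original tip):

```sql
-- Guard an SP so its logic only runs for the intended caller
CREATE PROCEDURE dbo.uspGetReportData
AS
BEGIN
  IF APP_NAME() <> 'App SSRS Reports'
    RETURN  -- some other application: skip the logic and just return

  -- intended application: the procedure's actual query logic goes here
  SELECT FirstName, LastName
  FROM   Person.Person
END
GO
```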
This function can also be useful for logging the multiple applications that share the same SP. You can create a table to collect this data and then use it to analyze how the SP is being used by all of the applications that call it.

Next Steps

  • Create different versions of the same report and use the version of the report in the application name parameter of the connection string.
  • In the SP that provides data to the report, use the APP_NAME function to allow only selected versions of the report to execute the query logic inside the stored procedure.

http://www.mssqltips.com/sqlservertip/2897/using-the-sql-server-appname-function-to-control-stored-procedure-execution

Friday 9 August 2013

SQL Server Query Stress Testing Tool

The SQLQueryStress tool provided by Adam Machanic can be used to apply additional stress when testing your stored procedures. It can also feed a dataset in as random parameter values when testing your stored procedures. You can read more about SQLQueryStress on the tool's documentation page to find details about how to use it.
For the purpose of this tip we are going to use the uspGetEmployeeManagers stored procedure in the AdventureWorks2008R2 database. You can exchange this with your stored procedure to walk through this tip in your own environment.

Step 1 – Query/Stored procedure setup

Our first step is to test the following stored procedure with a test parameter. This is done in Management Studio using the query shown below.
exec [dbo].[uspGetEmployeeManagers] @BusinessEntityID = 8
GO

Now that we know we have a working stored procedure and a valid parameter that returns data, we can get started with SQLQueryStress. Once you've downloaded and installed SQLQueryStress, fire up the tool and paste in the code that you used in Management Studio. Next, click on the database button to configure the database connection.

Step 2 - Configure Database Connectivity


Now that we've clicked on the database button, we want to connect to our AdventureWorks database. In this example I am using an instance named “r2” on my localhost. We will connect with Windows authentication, and our default database will be AdventureWorks2008R2. Once this is done, click on Test Connection and then on "OK" in the popup window. We'll see the Connection Succeeded message verifying that our connection settings are correct.

Step 3 - Clear Proc Cache


Before we execute our stored procedure using SQLQueryStress, we are going to clear out the procedure cache so we can track the total executions of our stored procedure. This shouldn't be done on a production system as it can cause significant performance problems: all user objects would have to be recompiled to get their plans back into the procedure cache. We are doing it in this walkthrough to show how we can count the total executions of the stored procedure.
NOTE: In SQL Server 2008 and up you can clear a specific plan from the plan cache. In this example we are clearing out all plans in case you're using SQL 2005. Once again, this shouldn't be done on a production system. Please see BOL for a specific example of clearing out a single plan.
-- FREEPROCCACHE will purge all cached procedures from the procedure cache --
-- Starting in SQL Server 2008 you can purge a specific plan from plan cache --
DBCC FREEPROCCACHE
GO
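For completeness, here is one way (SQL Server 2008 and up) to clear just our procedure's plan; the handle shown in the comment is a placeholder you'd replace with the value the query actually returns:

```sql
-- find the plan handle for the stored procedure we are testing
SELECT cp.plan_handle
FROM   sys.dm_exec_cached_plans cp
       CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE  cp.objtype = 'Proc'
  AND  st.text LIKE '%uspGetEmployeeManagers%'

-- then purge only that plan, e.g.:
-- DBCC FREEPROCCACHE (0x05000600...)  -- placeholder handle
```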

Step 4 - Execute Stored Procedure Using SQLQueryStress
Now that we have established our connection and specified a default database, we are going to execute the stored procedure specified in step one. You can execute the stored procedure once by making sure the number of iterations and number of threads both have the value of “one.” We will go over these options in more detail a little later in the tip. Once those values are set correctly, execute the stored procedure once by clicking the “GO” button on the top right side of the SQLQueryStress tool.
Once the stored procedure execution completes you will see that statistics are generated to help give you valuable feedback towards your workload. You can see the iterations that completed. In this case we only executed the stored procedure once. You can also see valuable information for actual seconds, CPU, Logical reads and elapsed time as shown in the screen shot below.

Step 5 - View Total Executions via T-SQL

Now we will execute the following T-SQL script below, which will give us the execution count for our stored procedure. We just cleared the procedure cache so you will get an execution count of one as shown in the screen shot below.
SELECT  DB_NAME(st.dbid) DBName
,OBJECT_SCHEMA_NAME(st.objectid,dbid) SchemaName
,OBJECT_NAME(st.objectid,dbid) StoredProcedure
,MAX(cp.usecounts) Execution_count
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
WHERE DB_NAME(st.dbid) IS NOT NULL 
AND cp.objtype = 'proc'
AND OBJECT_NAME(st.objectid, dbid) LIKE 'uspGetEmployeeManagers'
GROUP BY cp.plan_handle, DB_NAME(st.dbid),
OBJECT_SCHEMA_NAME(objectid,st.dbid),
OBJECT_NAME(objectid,st.dbid)
ORDER BY MAX(cp.usecounts) DESC
GO




Step 6 - Using SQLQueryStress to Add Additional Stress with Multiple Threads.


Now that we have gone over the basics of executing a stored procedure with SQLQueryStress, let's add additional stress by changing the values for Number of Threads and Number of Iterations. The number of iterations means the query will be executed that many times on each thread, and the number of threads specifies how many concurrent threads (SPIDs) will each execute that number of iterations.
Since we changed the number of iterations to five and the number of threads to five, we expect the total number of iterations completed to be twenty-five: five threads times five iterations executed on each thread. Below is a screen shot of the completed workload, with valuable average statistics gathered during the run, after we clicked on the “GO” button.
If we rerun our T-SQL script from step 5, you will see that there is a total of twenty-six executions for the uspGetEmployeeManagers stored procedure. This includes our initial execution from step 4 and the additional stress applied in step 6.

Step 7 - Use Random Values for Parameters with SQLQueryStress


Next, we are going to cover using a dataset to randomly provide parameters to our stored procedure. Currently we use a hard coded value of eight as the value for the BusinessEntityID parameter. Now, we are going to click on the “Parameter Substitution” button to use a T-SQL script to create a pool of values that will be used during our stress testing of the uspGetEmployeeManagers stored procedure.
Once the parameter substitution window opens we will want to copy our T-SQL statement provided below that will generate the BusinessEntityID values we would want to pass into our stored procedure.
SELECT [BusinessEntityID]
FROM [AdventureWorks2008R2].[HumanResources].[Employee]
GO

Once you've added the T-SQL script, select the column you would like to map to the parameter used for your stored procedure.
Finally, the last part of this step is to drop the hard-coded value assignment for the stored procedure, so that the parameter substitution is used for the parameter value.

Step 8 - Wrap-up Results


To wrap up this tip, we have gone over controlling a workload to provide additional stress and randomly substituting parameters for your workload replay. If you capture a SQL trace and replay the workload, you should see output similar to the screen shot below. Notice that each thread (SPID) has five iterations, and that the BusinessEntityID values are randomly selected from the pool created by our parameter-substitution query.
 
For more info: http://www.mssqltips.com/sqlservertip/2730/sql-query-stress-tool/