
Monday 16 June 2014

See SQL Server Backup File Date and Time

I want to be able to see when a backup file was created. Does SQL Server provide a way to add the current date and time to my backup file filenames?


SQL Server records the date and time inside the backup file; to see this information, you have to read the backup file's header with the following statement:

RESTORE HEADERONLY FROM DISK =  N'c:\temp\TEST-201406-192507.bak'

[Screenshot: RESTORE HEADERONLY output showing the backup header columns]



This statement returns the BackupStartDate and BackupFinishDate as columns.
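If the backup file itself isn't handy, the msdb backup history tables record the same dates for every backup taken on the instance. A quick query (a sketch, assuming the backup history hasn't been purged):

SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        bmf.physical_device_name
FROM    msdb.dbo.backupset AS bs
        INNER JOIN msdb.dbo.backupmediafamily AS bmf ON bs.media_set_id = bmf.media_set_id
WHERE   bs.type = 'D'   -- full database backups only
ORDER BY bs.backup_finish_date DESC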


Another method, which uses the msdb restore history tables, is given below.


SELECT [rs].[destination_database_name]
,[rs].[restore_date]
,[bs].[backup_start_date]
,[bs].[backup_finish_date]
,[bs].[database_name] AS [source_database_name]
,[bmf].[physical_device_name] AS [backup_file_used_for_restore]
FROM msdb..restorehistory rs
INNER JOIN msdb..backupset bs ON [rs].[backup_set_id] = [bs].[backup_set_id]
INNER JOIN msdb..backupmediafamily bmf ON [bs].[media_set_id] = [bmf].[media_set_id]
ORDER BY [rs].[restore_date] DESC

However, none of these methods lets you easily identify when a backup file was created just by looking at the file system. Many people want the date and time in the backup file's name so that the backups sort in chronological order. The following script builds a dynamic SQL statement that backs up a database and encodes the current date and time in the backup filename, in the format databasename-YYYYMMDD-HHMMSS.bak. The script adds a leading zero to each time element (hours, minutes, and seconds) so that 1:02 A.M. shows as 010200 instead of 10200; the leading zeros ensure that the filenames sort in the correct order in the file system. Note that the script assumes the c:\temp directory exists, so change the path to suit your environment.


DECLARE @FileName NVARCHAR(256), @NSQL NVARCHAR(4000)
SELECT @FileName = 'c:\temp\'
+ db_name()
+ N'-' + CONVERT(NCHAR(8), getdate(), 112)
+ N'-' + right(N'0' + rtrim(CONVERT(NCHAR(2), datepart(hh, getdate()))), 2)
+ right(N'0' + rtrim(CONVERT(NCHAR(2), datepart(mi, getdate()))), 2)
+ right(N'0' + rtrim(CONVERT(NCHAR(2), datepart(ss, getdate()))), 2) + N'.bak'
PRINT @FileName
SELECT @NSQL = 'BACKUP DATABASE ' + QUOTENAME(db_name(), '[') + ' TO DISK = ''' + @FileName + ''''
PRINT @NSQL
EXEC (@NSQL)
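On SQL Server 2012 and later, the date-and-time assembly above can be shortened with FORMAT, which takes care of the leading zeros for you. A sketch using the same c:\temp path:

DECLARE @FileName NVARCHAR(256), @NSQL NVARCHAR(4000)
SELECT @FileName = N'c:\temp\' + db_name() + N'-' + FORMAT(getdate(), 'yyyyMMdd-HHmmss') + N'.bak'
SELECT @NSQL = 'BACKUP DATABASE ' + QUOTENAME(db_name()) + ' TO DISK = ''' + @FileName + ''''
PRINT @NSQL
EXEC (@NSQL)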
Reference: http://www.mssqltips.com/sqlservertip/1150/what-is-in-your-sql-server-backup-files/
 

Monday 30 September 2013

Immediate Deadlock notifications in SQL Server


Deadlocks can be a pain to debug since they're so rare and unpredictable. The problem lies in reproducing them in your dev environment. That's why it's crucial to get as much information about them from the production environment as possible.
There are two ways to monitor deadlocks, which I'll talk about in future posts: SQL Server tracing and error log checking. Unfortunately both suffer from the same problem: you don't know immediately when a deadlock occurs. Getting this information as soon as possible can be crucial in production environments. Sure, you can always turn trace flag 1222 on, but that still doesn't solve the immediate-notification problem.
One problem for some might be that this method is only truly useful if you limit data access to stored procedures. <joke> So all you ORM lovers can stop reading, since this doesn't apply to you anymore! </joke>
The other problem is that it requires a rewrite of the problematic stored procedures to support it. However, since SQL Server 2005 came out, my opinion has been that every stored procedure should have a try ... catch block. There's no visible performance hit from this and the benefits can be huge. One of those benefits is instant deadlock notification.

Needed "infrastructure"
So let's see how it's done. This must be implemented in the database you wish to monitor, of course.
First we need a view that returns lock information about the deadlock that just happened. You can read why this type of query gives us the information we need in my previous post.
CREATE VIEW vLocks
AS
SELECT  L.request_session_id AS SPID,
        DB_NAME(L.resource_database_id) AS DatabaseName,
        O.Name AS LockedObjectName,
        P.object_id AS LockedObjectId,
        L.resource_type AS LockedResource,
        L.request_mode AS LockType,
        ST.text AS SqlStatementText,       
        ES.login_name AS LoginName,
        ES.host_name AS HostName,
        TST.is_user_transaction AS IsUserTransaction,
        AT.name AS TransactionName   
FROM    sys.dm_tran_locks L
        LEFT JOIN sys.partitions P ON P.hobt_id = L.resource_associated_entity_id
        LEFT JOIN sys.objects O ON O.object_id = P.object_id
        LEFT JOIN sys.dm_exec_sessions ES ON ES.session_id = L.request_session_id
        LEFT JOIN sys.dm_tran_session_transactions TST ON ES.session_id = TST.session_id
        LEFT JOIN sys.dm_tran_active_transactions AT ON TST.transaction_id = AT.transaction_id
        LEFT JOIN sys.dm_exec_requests ER ON AT.transaction_id = ER.transaction_id
        CROSS APPLY sys.dm_exec_sql_text(ER.sql_handle) AS ST
WHERE   resource_database_id = db_id()
GO
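To sanity-check the view, you can hold some locks in one window and query the view from another. A throwaway test, where AnyTable stands in for any table in the monitored database:

-- window 1: take some locks and hold them for 30 seconds
BEGIN TRAN
    SELECT TOP (1) * FROM AnyTable WITH (UPDLOCK, HOLDLOCK)
    WAITFOR DELAY '00:00:30'
ROLLBACK

-- window 2 (while window 1 is still waiting):
SELECT * FROM vLocks ORDER BY SPID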
Next we have to create our stored procedure template:
CREATE PROC <ProcedureName>
AS
  BEGIN TRAN
    BEGIN TRY

      <SPROC TEXT GOES HERE>

    COMMIT
  END TRY
  BEGIN CATCH
    -- check transaction state
    IF XACT_STATE() = -1
    BEGIN
      DECLARE @message xml
      -- get our deadlock info from the view
      SET @message = '<TransactionLocks>' + (SELECT * FROM vLocks ORDER BY SPID FOR XML PATH('TransactionLock')) + '</TransactionLocks>'

      -- issue the rollback here so we don't roll back the mail sending below
      ROLLBACK

      -- get our error message and number
      DECLARE @ErrorNumber INT, @ErrorMessage NVARCHAR(2048)
      SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorMessage = ERROR_MESSAGE()

      -- if it's deadlock error send mail notification
      IF @ErrorNumber = 1205
      BEGIN
        DECLARE @MailBody NVARCHAR(max)
        -- create our mail body in XML format. You can change this to your liking.
        SELECT  @MailBody = '<DeadlockNotification>'
                            + 
                            (SELECT 'Error number: ' + isnull(CAST(@ErrorNumber AS VARCHAR(5)), '-1') + CHAR(10) +
                                    'Error message: ' + isnull(@ErrorMessage, ' NO error message') + CHAR(10)
                             FOR XML PATH('ErrorMessage'))
                            +
                            CAST(ISNULL(@message, '') AS NVARCHAR(MAX))
                            +
                            '</DeadlockNotification>'
        -- for testing purposes
        -- SELECT CAST(@MailBody AS XML)

        -- send an email with the defined email profile.
        -- since this is async it doesn't halt execution
        EXEC msdb.dbo.sp_send_dbmail
                       @profile_name = 'your mail profile',
                       @recipients = 'dba@yourCompany.com',
                       @subject = 'Deadlock occurred notification',
                       @body = @MailBody;
      END
    END
  END CATCH
GO
The main part of this stored procedure is of course the CATCH block. The first line there checks the XACT_STATE() value. This is a scalar function that reports the user transaction state; -1 means that the transaction is uncommittable and has to be rolled back, which is the state of the victim transaction in SQL Server's internal deadlock-resolution process. Next we read from our vLocks view to get the full information (SPID, the SQL statement text of both sessions, lock types, etc.) about both SPIDs involved in the deadlock. This is possible because our deadlock victim transaction hasn't been rolled back yet, so the locks are still present. We save this data into an XML message. Then we roll back our transaction to release its locks. Using the error message and its corresponding number we check whether the error is 1205 (deadlock), and if it is, we send our message in an email. How to configure database mail can be seen here.
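As a rough guide, a minimal Database Mail setup looks something like this (the account, profile, server and address values are placeholders you would replace with your own):

-- enable Database Mail
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

-- create an account and a profile, then link them
EXEC msdb.dbo.sysmail_add_account_sp
        @account_name    = 'DeadlockMailAccount',
        @email_address   = 'dba@yourCompany.com',
        @mailserver_name = 'smtp.yourCompany.com';
EXEC msdb.dbo.sysmail_add_profile_sp
        @profile_name = 'your mail profile';
EXEC msdb.dbo.sysmail_add_profileaccount_sp
        @profile_name    = 'your mail profile',
        @account_name    = 'DeadlockMailAccount',
        @sequence_number = 1;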
Both the view and the stored procedures template can and probably should be customized to suit your needs.

Testing the theory
Let's try it out and see how it works with a textbook deadlock example that you can find in every book or tutorial.
-- create our deadlock table with 2 simple rows
CREATE TABLE DeadlockTest ( id INT)
INSERT INTO DeadlockTest
SELECT 1 UNION ALL
SELECT 2
GO
Next create two stored procedures (spProc1 and spProc2) with our template:
For spProc1 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 12
WHERE id = 2
  
-- wait 5 secs to set up the deadlock condition in the other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 11
WHERE id = 1

For spProc2 replace <SPROC TEXT GOES HERE> in the template with:
UPDATE DeadlockTest
SET id = 11
WHERE id = 1

-- wait 5 secs to set up the deadlock condition in the other window
WAITFOR DELAY '00:00:05'

UPDATE DeadlockTest
SET id = 12
WHERE id = 2
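
For clarity, this is what spProc1 looks like once its body is dropped into the template (the mail profile and recipient are still the placeholders from the template):

CREATE PROC spProc1
AS
  BEGIN TRAN
    BEGIN TRY

      UPDATE DeadlockTest
      SET id = 12
      WHERE id = 2

      -- wait 5 secs to set up the deadlock condition in the other window
      WAITFOR DELAY '00:00:05'

      UPDATE DeadlockTest
      SET id = 11
      WHERE id = 1

    COMMIT
  END TRY
  BEGIN CATCH
    IF XACT_STATE() = -1
    BEGIN
      DECLARE @message xml
      SET @message = '<TransactionLocks>' + (SELECT * FROM vLocks ORDER BY SPID FOR XML PATH('TransactionLock')) + '</TransactionLocks>'

      ROLLBACK

      DECLARE @ErrorNumber INT, @ErrorMessage NVARCHAR(2048)
      SELECT @ErrorNumber = ERROR_NUMBER(), @ErrorMessage = ERROR_MESSAGE()

      IF @ErrorNumber = 1205
      BEGIN
        DECLARE @MailBody NVARCHAR(max)
        SELECT  @MailBody = '<DeadlockNotification>'
                            + (SELECT 'Error number: ' + isnull(CAST(@ErrorNumber AS VARCHAR(5)), '-1') + CHAR(10) +
                                      'Error message: ' + isnull(@ErrorMessage, ' NO error message') + CHAR(10)
                               FOR XML PATH('ErrorMessage'))
                            + CAST(ISNULL(@message, '') AS NVARCHAR(MAX))
                            + '</DeadlockNotification>'
        EXEC msdb.dbo.sp_send_dbmail
                       @profile_name = 'your mail profile',
                       @recipients = 'dba@yourCompany.com',
                       @subject = 'Deadlock occurred notification',
                       @body = @MailBody;
      END
    END
  END CATCH
GO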

Next open 2 query windows in SSMS:
In window 1 put this script:
exec spProc1
In window 2 put this script:
exec spProc2

Run the script in the first window and, after a second or two, run the script in the second window. A deadlock will happen, and a few moments after the victim transaction fails you should get the notification mail (the mail profile has to be properly configured, of course).
The resulting email contains XML with full information about the deadlock. You can also inspect it directly by commenting out the msdb.dbo.sp_send_dbmail execution and uncommenting the SELECT CAST(@MailBody AS XML) line.

Monday 16 September 2013

Using the SQL Server APP_NAME function to control stored procedure execution

Logic reuse is one of the most common practices in database development. For example, query/business logic developed in stored procedures for one application (say, a .NET page) can easily be reused by another application (an SSIS ETL package or an SSRS report, for example). Often, though, this is not the intention. Even when an application can connect and has the required privileges to use the database objects, you may still want a level of control inside the database object itself, so that it checks which application is calling and executes its logic only for the intended purpose.
In this tip we will look at one way to control stored procedure execution so that reuse happens only for the intended purpose and changes do not break other applications that may be using the same code. Typically in a solution development life-cycle, an application starts with front-end development against a back-end database, and the database contains the objects used to host as well as query the data.
A common standard practice is that an application ID is created at the solution level. This ID is a Windows account meant to be used by all the front-end components of a solution to connect to the database using Windows integrated security and fetch the necessary data. Users connect to the application with their own credentials, and the application, in turn, connects to the database using the application ID and serves data based on the role of the user.
For example, if the solution has components like a front end, web services, ETL packages, reports and so on, then all of them connect to the database using the same application ID. Now consider the scenario where a stored procedure was created to be used only by the web services. Other teams can see this SP and may intend to use it for their own component. So how do we make sure that, even if a database user has privileges on the stored procedure, it executes only for the application it's targeted for?
Using Application Name
One of the easiest solutions in this case is to set the "Application Name" property in the connection string and verify this name in the SP using the APP_NAME SQL Server system function. To test this scenario, follow the steps below.
Step 1
Open SSMS and create a stored procedure in the database of your choice as shown in the screenshot below. In my case I created this stored procedure in the AdventureWorks database. The procedure checks the application name returned by the connection and returns it to the caller.
Execute this stored procedure from SSMS and check the result:
[Screenshot: creating the stored procedure in SSMS and checking its result]
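A minimal version of such a procedure might look like this (the name uspCheckAppName is just a placeholder, not from the original tip):

USE AdventureWorks;   -- or the database of your choice
GO
CREATE PROCEDURE dbo.uspCheckAppName
AS
BEGIN
    -- APP_NAME() returns the application name reported by the current connection
    SELECT APP_NAME() AS ApplicationName;
END
GO
-- run from an SSMS query window this returns something like
-- 'Microsoft SQL Server Management Studio - Query'
EXEC dbo.uspCheckAppName;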
Step 2
Open SQL Server Data Tools and create a new report project. Add a new report to the project and create a new connection.
If you browse to the Advanced settings of the connection string dialog box, you will find a connection string parameter named "Application Name". Set the value of this property to "App SSRS Reports" and click OK.
By setting this property value, a parameter named "Application Name" will be added to the connection string as shown in the below screenshot.
[Screenshot: the "Application Name" parameter added to the connection string in the new report project]
Step 3
Create a dataset using this connection and use this on the report. Execute the report and you should get something similar to the below screenshot.
[Screenshot: report output produced by the dataset that uses this connection]
Using the APP_NAME function it is possible to add a check inside an SP for whether the call is made by the intended application, and to decide, based on the application name, whether to execute the logic for that request. Your stored procedure can easily be modified to check the application name: if the caller is the intended application, the rest of the code executes; if not, the procedure skips the logic and simply returns.
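A sketch of that pattern (the procedure name is illustrative; 'App SSRS Reports' matches the value set in Step 2):

CREATE PROCEDURE dbo.uspGetReportData
AS
BEGIN
    -- run the report logic only for the intended application
    IF APP_NAME() <> 'App SSRS Reports'
        RETURN;   -- any other caller gets nothing back and no work is done

    -- the real report query would go here
    SELECT APP_NAME() AS ApplicationName, GETDATE() AS ReportGeneratedAt;
END
GO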
This function can also be useful for logging which applications share the same SP. You can create a table to collect this data and then use it to analyze how the SP is being used by all of the applications that call it.
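A possible logging table and the insert you would add to the shared SP (all names are illustrative):

CREATE TABLE dbo.SpUsageLog
(
    LogId      INT IDENTITY(1, 1) PRIMARY KEY,
    ProcName   SYSNAME       NOT NULL,
    AppName    NVARCHAR(128) NULL,
    LoginName  SYSNAME       NOT NULL,
    ExecutedAt DATETIME      NOT NULL DEFAULT GETDATE()
);
GO
-- inside the shared stored procedure:
INSERT INTO dbo.SpUsageLog (ProcName, AppName, LoginName)
VALUES (OBJECT_NAME(@@PROCID), APP_NAME(), SUSER_SNAME());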

Next Steps

  • Create different versions of the same report and use the version of the report in the application name parameter of the connection string.
  • In the SP that provides data to the report, use the APP_NAME function to allow only selected versions of the report to execute the query logic inside the stored procedure.

http://www.mssqltips.com/sqlservertip/2897/using-the-sql-server-appname-function-to-control-stored-procedure-execution

Friday 8 March 2013

Refreshing SQL Server views


The system stored procedure sp_refreshview updates the metadata for the specified non-schema-bound view. Persisted metadata for a view can become outdated because of changes to the underlying objects upon which the view depends. If a view is not created with SCHEMABINDING, its metadata is not kept in sync automatically when those underlying objects change.

This stored procedure should be run when changes are made to the objects underlying the view that affect the view's definition; otherwise, the view might produce unexpected results when it is queried. The user requires ALTER permission on the view and REFERENCES permission on any common language runtime (CLR) user-defined types and XML schema collections that are referenced by the view's columns.

Note, however, that if a view uses * instead of an explicit column list, sp_refreshview will not produce any error after columns have been dropped from the base table; it simply refreshes the view with the columns that remain.
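A quick way to see this behaviour with throwaway objects:

CREATE TABLE dbo.RefreshDemo (a INT, b INT);
GO
CREATE VIEW dbo.vRefreshDemo AS SELECT * FROM dbo.RefreshDemo;
GO
ALTER TABLE dbo.RefreshDemo DROP COLUMN b;
GO
-- completes without error; the view's metadata now lists only column a
EXEC sp_refreshview N'dbo.vRefreshDemo';
SELECT * FROM dbo.vRefreshDemo;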

The following example refreshes the metadata for the view Sales.vIndividualCustomer.
USE AdventureWorks2012;
GO
EXECUTE sp_refreshview N'Sales.vIndividualCustomer';
Creating a script that updates all views that have dependencies on a changed object

Assume that the table Person.Person was changed in a way that would affect the definition of any views that are created on it. The following example creates a script that refreshes the metadata for all views that have a dependency on table Person.Person.

USE AdventureWorks2012;
GO
SELECT DISTINCT 'EXEC sp_refreshview ''' + name + ''''
FROM sys.objects AS so
INNER JOIN sys.sql_expression_dependencies AS sed
    ON so.object_id = sed.referencing_id
WHERE so.type = 'V' AND sed.referenced_id = OBJECT_ID('Person.Person');
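
If you want the refresh to run immediately instead of generating a script to copy and paste, a variation using a cursor (a sketch) is:

DECLARE @stmt NVARCHAR(MAX);

DECLARE refresh_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT N'EXEC sp_refreshview N''' + SCHEMA_NAME(so.schema_id) + N'.' + so.name + N''''
    FROM sys.objects AS so
    INNER JOIN sys.sql_expression_dependencies AS sed
        ON so.object_id = sed.referencing_id
    WHERE so.type = 'V' AND sed.referenced_id = OBJECT_ID('Person.Person');

OPEN refresh_cur;
FETCH NEXT FROM refresh_cur INTO @stmt;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @stmt;
    FETCH NEXT FROM refresh_cur INTO @stmt;
END
CLOSE refresh_cur;
DEALLOCATE refresh_cur;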
