Thursday, May 29, 2014

SQL Server Storage Engine: Update Statement Life Cycle Summary

From the previous blog
SQL Server Storage Engine: Select Statement life cycle summary

Now we will learn what happens in the database engine when an UPDATE statement is issued. The flow through the SNI, the Relational Engine, and the Access Methods is the same as for a SELECT statement (points 1 through 5 in the previous post), as we can also see in the attached picture.

At the Access Methods stage, where a SELECT statement goes straight to the Buffer Manager, an UPDATE statement goes to the Transaction Manager first.

Transaction Manager

Write-ahead logging (WAL): The Access Methods code requests that the changes it wants to make are logged, and the Log Manager writes the changes to the transaction log.

Writing to the transaction log is the only part of a data modification transaction that always needs a physical write to disk because SQL Server depends on being able to reread that change in the event of system failure.

What’s actually stored in the transaction log is details of the page changes that occurred as the result of a modification statement. This is all that SQL Server needs in order to undo any change.

The actual data modification can only be performed when confirmation is received that the operation has been physically written to the transaction log. This is why transaction log performance is so crucial.

Once confirmation is received by the Access Methods, it passes the modification request on to the Buffer Manager to complete.

Buffer Manager

If the page that needs to be modified is already in cache, the page is modified in the cache, and confirmation is sent back to the Access Methods and ultimately to the client.

The key point here is that the UPDATE statement has changed the data in the data cache, not in the actual database file on disk.

This is done for performance reasons, and the page is now what’s called a dirty page because it’s different in memory from what’s on disk.

These dirty pages will be written back to the database file periodically whenever the free buffer list is low or a checkpoint occurs.
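As a quick illustration of the dirty-page idea, the modified-in-memory pages currently held in the buffer pool can be inspected through the `sys.dm_os_buffer_descriptors` DMV. This is only a sketch; exact column availability varies slightly by SQL Server version.

```sql
-- Count dirty (modified in memory, not yet written to disk) pages per database.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS dirty_page_count
FROM   sys.dm_os_buffer_descriptors
WHERE  is_modified = 1
GROUP  BY DB_NAME(database_id)
ORDER  BY dirty_page_count DESC;
```

Running this before and after a CHECKPOINT on a busy database shows the dirty page count drop as the pages are flushed to the data file.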

Tuesday, May 27, 2014

SQL Server Storage Engine: SELECT Statement Life Cycle Summary

Figure shows the whole life cycle of a SELECT query, described here:

1. The SQL Server Network Interface (SNI) on the client established a connection to the SNI on the SQL Server using a network protocol such as TCP/IP. It then created a connection to a TDS endpoint over the TCP/IP connection and sent the SELECT statement to SQL Server as a TDS (Tabular Data Stream) message.

2. The SNI on the SQL Server unpacked the TDS message, read the SELECT statement, and passed a “SQL Command” to the Command Parser.

3. The Command Parser checked the plan cache in the buffer pool for an existing, usable query plan that matched the statement received. When it didn’t find one, it created a query tree based on the SELECT statement and passed it to the Optimizer to generate a query plan.

4. The Optimizer generated a “zero cost” or “trivial” plan in the pre-optimization phase because the statement was so simple. The query plan created was then passed to the Query Executor for execution.
    Had the plan not been trivial, the Optimizer would have worked through the following phases to produce an execution plan:

         Phase 0 — During this phase the optimizer looks only at nested loop joins and won't consider parallel operators. The optimizer will stop here if the cost of the plan it has found is < 0.2. A plan generated at this phase is known as a "transaction processing", or TP, plan.
         Phase 1 — The optimizer uses a subset of the possible optimization rules and looks for common patterns for which it already has a plan. The optimizer will stop here if the cost of the plan it has found is < 1.0. Plans generated in this phase are called "quick" plans.
         Phase 2 — This final phase is where the optimizer pulls out all the stops and is able to use all of its optimization rules. It also looks at parallelism and indexed views (if you're running Enterprise Edition). Plans created in this phase have an optimization level of "Full."
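You can get a rough feel for how often each of these phases fires on your own instance from the `sys.dm_exec_query_optimizer_info` DMV. A minimal sketch, using the documented counter names:

```sql
-- Cumulative optimizer activity since instance startup:
-- trivial plans vs. phase 0/1/2 ("search 0/1/2") optimizations.
SELECT counter, occurrence, value
FROM   sys.dm_exec_query_optimizer_info
WHERE  counter IN ('trivial plan', 'search 0', 'search 1', 'search 2');
```

On most OLTP workloads the 'trivial plan' and 'search 1' counters dominate, which matches the early-exit behavior described above.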

5. At execution time, the Query Executor determined that data needed to be read to complete the query plan so it passed the request to the Access Methods in the Storage Engine via an OLE DB interface.

6. The Access Methods needed to read a page from the database to complete the request from the Query Executor and asked the Buffer Manager to provision the data page.

7. The Buffer Manager checked the data cache to see if it already had the page in cache. It wasn’t in cache so it pulled the page from disk, put it in cache, and passed it back to the Access Methods.

8. Finally, the Access Methods passed the result set back to the Relational Engine to send to the client.

Tuesday, May 13, 2014

Capture Deadlock information and History using SQL Server

Every DBA has to prepare database health check reports covering different key points, i.e. database backup size trend, database size trend, index utilization and fragmentation, deadlock information, statistics reports, event and error logs, cache hit ratio, and so on. It is easy to report on backups, database sizes, indexes, statistics, and the event and error logs, but when it comes to deadlock information, the most important question in every DBA's mind is: how can I get information about the deadlocks that happened last night? I will explain the other tasks step by step in different posts, but in this post I will try to explain deadlocks.

What is Deadlock

Deadlocks happen when two or more processes try to access the same resources and each holds a lock the other needs. Each process is waiting for the other to finish before it can complete its own action. A deadlock is hard to reproduce in real time, but I forced one for testing purposes.
Think of two people (Person A and Person B) going furniture shopping at the same time. Unknown to them, they both want the same goods (Good A and Good B). One starts searching from the right, the other from the left side of the store. Person A finds Good A (exclusive lock) and starts looking for Good B (shared lock). But while Person A was searching, Person B found Good B (exclusive lock) and started looking for Good A (shared lock). When each finds the other good, they realize it is already reserved and they cannot have it; neither Person A nor Person B is willing to let the other person have both goods. Therefore we are stuck in a deadlock.

Create a Deadlock

Let’s start this example by creating a new table; I created a table (abc) with three columns and some default data.

The sample code starts a transaction with BEGIN TRAN, but there is no COMMIT or ROLLBACK. This means the transaction is still doing work, stays open, and holds an exclusive lock on table (abc).

In a second query window we try to delete data from the same table. Because of the lock on the table, this process goes into a waiting state, waiting on the exclusive lock.
While both transactions are open and each is waiting for the other to complete its action, I execute another statement against table (abc) that causes a deadlock; the deadlock monitor thread (the default mechanism for resolving deadlocks) terminates one process so the other can complete.
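The setup described above can be sketched as follows. This is only an illustration: the column names `id` and `val` are placeholders for the columns of the table (abc) created earlier, and each session must run in its own query window.

```sql
-- Session 1: take an exclusive lock on one row of abc and leave the transaction open.
BEGIN TRAN;
UPDATE abc SET val = 'x' WHERE id = 1;

-- Session 2: lock a second row, then try to read session 1's row.
BEGIN TRAN;
UPDATE abc SET val = 'y' WHERE id = 2;
SELECT * FROM abc WHERE id = 1;   -- blocks, waiting on session 1's exclusive lock

-- Back in session 1: try to read session 2's row.
SELECT * FROM abc WHERE id = 2;   -- deadlock: each waits on the other,
                                  -- and one session is chosen as the victim
```

Within a few seconds the deadlock monitor kills one session with error 1205, and the other completes.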

Search a Deadlock

We have many ways to search deadlock info, some of them are listed below

Windows Performance Monitor 

SQL Server Profiler can capture a server-side trace (Event Class: Locks, Event Name: Deadlock graph) that generates an XML graph, which is very easy to read to figure out what is going on. Windows Performance Monitor, on the other hand, counts all the deadlocks that have happened on your server since the last restart. You can read the same counter from inside SQL Server using the following query:

SELECT cntr_value AS NumOfDeadLocks, *
  FROM sys.dm_os_performance_counters
 WHERE object_name = 'SQLServer:Locks'
   AND counter_name = 'Number of Deadlocks/sec'
   AND instance_name = '_Total'

Trace Flags 1204 and 1222

Trace flag 1204 has existed since at least SQL Server 2000, and trace flag 1222 was introduced in SQL Server 2005. Both output deadlock information to the SQL Server ERRORLOG. You can enable a trace flag using the following query.

DBCC TRACEON (1222,-1)

You can review trace status using following query

DBCC TRACESTATUS (1222, 1204)

Once a deadlock has occurred, it can be reviewed in the error log. Using the following query you can review the deadlock in detail (the optional third parameter filters the log for a search string).

EXEC sp_readerrorlog 0, 1, N'deadlock'

System Health

System Health is like the default trace that runs in the background. System Health has been around since SQL Server 2008; however, it was not visible in the GUI. The following query can be used to review deadlock information from its ring buffer target.

SELECT xed.value('@timestamp', 'datetime2(3)') AS CreationDate,
       xed.query('.') AS XEvent
FROM (
     SELECT CAST([target_data] AS XML) AS TargetData
     FROM sys.dm_xe_session_targets AS st
     INNER JOIN sys.dm_xe_sessions AS s
            ON s.address = st.event_session_address
     WHERE s.name = N'system_health'
       AND st.target_name = N'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData (xed)
ORDER BY CreationDate DESC

We can click on any of the XML links to get the complete XML for a deadlock and then save it as a .XDL file to view it as a deadlock graph in SSMS.
System Health is great; however, it doesn't save deadlocks indefinitely. So next let's look at implementing deadlock monitoring using Extended Events, much like a SQL Server server-side trace.

Extended Events

The GUI for Extended Events arrived in SQL Server 2012 RTM. However, after you create the Extended Events trace (session) in the GUI, you can take the SQL code and run it against 2008 or 2008 R2 to create a similar trace.

Using Extended Events you can create a new session. After completing the wizard you will get deadlock information in the target files.

Following is the query to create the Extended Events session in 2008 and 2008 R2 (the session name used here is assumed from the target file name):

CREATE EVENT SESSION [Deadlock_Monitor] ON SERVER
 ADD EVENT sqlserver.xml_deadlock_report
 ADD TARGET package0.event_file(SET filename=N'C:\Deadlock\Deadlock_Monitor.xel')
WITH (STARTUP_STATE = ON);

To review the XML detail for a deadlock, you need to look at the XML document as shown in the picture.

From the attached picture you can see which two statements were involved in creating the deadlock. The <Owner> tag shows each process's lock and its type: "X" for the exclusive lock held and "S" for the shared lock in the waiting state.

Deadlock Monitor Thread

In SQL Server, to prevent this stalemate (i.e. a deadlock) from stalling the system, a deadlock monitor thread runs in the background to “help” resolve deadlocks.

If we look at sys.dm_os_waiting_tasks we find a system task that is always waiting: REQUEST_FOR_DEADLOCK_SEARCH. This thread wakes up every five seconds to see if we have any deadlocks, and if it finds one, it terminates one of the sessions to keep the resources available for the other session. How SQL Server decides which session to kill is a little tricky; if both sessions are equally cheap to roll back, it simply kills whichever came in second.
When the monitor does find a deadlock, it does not go straight back to sleep for five seconds: it wakes again right away to make sure the kill did not leave a nested deadlock behind, and if there is one, it kills that too and goes back to sleep. While deadlocks keep occurring, the search interval shrinks from five seconds down to as little as 100ms; that is, it can wake up ten times per second to deal with deadlock issues.
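You can see this background task for yourself. A minimal sketch, using the documented wait type name:

```sql
-- The deadlock monitor shows up as a permanently waiting system task.
SELECT session_id, wait_type, wait_duration_ms
FROM   sys.dm_os_waiting_tasks
WHERE  wait_type = N'REQUEST_FOR_DEADLOCK_SEARCH';
```

On an idle server, wait_duration_ms for this task cycles up toward roughly 5000ms before resetting, matching the five-second search interval described above.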

SQL Server does have to do a bit of work to resolve this scenario and decide which session to kill. It makes sure to kill the session that is the easiest to roll back, because when SQL Server kills a transaction, any work that transaction has done must be rolled back to bring the database to a consistent state. It decides this by looking at the LOG USED value: the session that has generated the least transaction log is killed, releasing the resources for the other process.

List of available Microsoft SQL server updates from start to 2014

Here is the list of available Microsoft SQL server updates from 2000 to 2014.

Sunday, April 27, 2014

System databases, backup procedures and restore strategies

Following are the system databases and their recommended backup procedures

·         master Database
Records all the system-level information for an instance of SQL Server, including symmetric key and certificate information. Back up master as often as necessary to protect the data sufficiently for your business needs. Best practice is a regular backup schedule, which you can supplement with an additional backup after a substantial update.
·        msdb Database
Is used by SQL Server Agent for scheduling alerts and jobs. Back up msdb whenever it is updated.
·        model Database
Is used as the template for all databases created on the instance of SQL Server. Modifications made to the model database, such as database size, collation, recovery model, and other database options, are applied to any databases created afterward. Back up model only when necessary for your business needs; for example, immediately after customizing its database options. Best practice:  We recommend that you create only full database backups of model, as required.
·        Resource Database
Is a read-only database that contains system objects that are included with SQL Server. System objects are physically persisted in the Resource database, but they logically appear in the sys schema of every database. The Resource database resides in the mssqlsystemresource.mdf file, which contains only code. Therefore, SQL Server cannot back up the Resource database.
·        tempdb Database
Is a workspace for holding temporary objects or intermediate result sets. You cannot back up the tempdb system database.
·        distribution Database
Exists only if the server is configured as a replication Distributor. This database stores metadata and history data for all types of replication, and transactions for transactional replication.
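The recommended backups above can be sketched in a few lines (the backup folder path is a placeholder; Resource and tempdb cannot be backed up, as noted):

```sql
-- Back up the recoverable system databases.
BACKUP DATABASE master TO DISK = N'C:\Backup\master.bak';
BACKUP DATABASE msdb   TO DISK = N'C:\Backup\msdb.bak';
BACKUP DATABASE model  TO DISK = N'C:\Backup\model.bak';
```

In practice these would run on the regular schedule described for each database, e.g. master on a fixed schedule plus after substantial changes, and msdb whenever jobs or alerts are updated.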

Database restore:
System databases
System databases can be restored only from backups that are created on the version of SQL Server that the server instance is currently running. For example, to restore a system database on a server instance that is running on SQL Server 2008 SP1, you must use a database backup that was created after the server instance was upgraded to SQL Server 2008 SP1.
To restore any database, the instance of SQL Server must be running. Startup of an instance of SQL Server requires that the master database is accessible and at least partly usable. If master becomes unusable, you can return the database to a usable state in either of the following ways:
·         Restore master from a current database backup.
·         Rebuild master completely
Performing a Complete Database Restore (Simple Recovery Model)
In a complete database restore, the goal is to restore the whole database. The whole database is offline for the duration of the restore. Before any part of the database can come online, all data is recovered to a consistent point in which all parts of the database are at the same point in time and no uncommitted transactions exist. Under the simple recovery model, the database cannot be restored to a specific point in time within a specific backup.
A full database restore under the simple recovery model involves only one or two RESTORE statements, depending on whether a differential database backup has to be restored.

If you are using only a full database backup, just restore the most recent backup, as shown in the following illustration.

If you are also using a differential database backup, restore the most recent full database backup without recovering the database, and then restore the most recent differential database backup and recover the database. The following illustration shows this process.

When you are completely restoring a database, one restore sequence should be used. The following example shows the critical options in a restore sequence for the complete database restore scenario. A restore sequence consists of one or more restore operations that move data through one or more of the phases of restore. Syntax and details that are not relevant to this purpose are omitted.
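Since the syntax is omitted above, here is a minimal sketch of such a restore sequence under the simple recovery model. The database name and backup file paths are placeholders:

```sql
-- Restore the most recent full backup without recovering,
-- then the most recent differential, recovering the database.
RESTORE DATABASE MyDatabase
   FROM DISK = N'C:\Backup\MyDatabase_full.bak'
   WITH NORECOVERY;
RESTORE DATABASE MyDatabase
   FROM DISK = N'C:\Backup\MyDatabase_diff.bak'
   WITH RECOVERY;
```

If there is no differential backup, the single RESTORE DATABASE statement is run WITH RECOVERY instead.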
Performing a Complete Database Restore (Full Recovery Model)
Under the full recovery model, the database can be restored to a specific point in time. The point in time can be the most recently available backup, a specific date and time, or a marked transaction.
Typically, recovering a database to the point of failure involves the following basic steps:
1.       Back up the active transaction log (known as the tail of the log). This creates a tail-log backup. If the active transaction log is unavailable, all transactions in that part of the log are lost.
2.       Restore the most recent full database backup without recovering the database (RESTORE DATABASE database_name FROM backup_device WITH NORECOVERY).
3.       If differential backups exist, restore the most recent one without recovering the database (RESTORE DATABASE database_name FROM differential_backup_device WITH NORECOVERY).
4.       Starting with the first transaction log backup that was created after the backup you just restored, restore the logs in sequence with NORECOVERY.
5.       Recover the database (RESTORE DATABASE database_name WITH RECOVERY). Alternatively, this step can be combined with restoring the last log backup.
A complete database restore can usually be recovered to a point in time or a marked transaction within a log backup. However, under the bulk-logged recovery model, if the log backup contains bulk-logged changes, point-in-time recovery is not possible. For more information, see Restoring a Database to a Point Within a Backup.
The following illustration shows this process. After a failure occurs (1), a tail-log backup is created (2). Next, the database is restored to the point of the failure. This involves restoring a database backup, a subsequent differential backup, and every log backup taken after the differential backup, including the tail-log backup.

When you are completely restoring a database, a single restore sequence should be used. The following example shows the critical options in a restore sequence for the complete database restore scenario in which the database is restored to the point of failure. A restore sequence consists of one or more restore operations that move data through one or more of the phases of restore. Syntax and details that are not relevant to this purpose are omitted.
The database is restored and rolled forward. A differential backup is used to reduce the roll-forward time. This restore sequence is intended to eliminate work loss; the last backup restored is a tail-log backup.
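The steps above can be sketched as follows under the full recovery model. All database, file, and backup names are placeholders, and only one log backup is shown between the differential and the tail:

```sql
-- 1. Back up the tail of the log; NORECOVERY leaves the database in a restoring state.
BACKUP LOG MyDatabase
   TO DISK = N'C:\Backup\MyDatabase_tail.trn'
   WITH NORECOVERY;

-- 2-3. Restore the full backup, then the differential, without recovering.
RESTORE DATABASE MyDatabase FROM DISK = N'C:\Backup\MyDatabase_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDatabase FROM DISK = N'C:\Backup\MyDatabase_diff.bak' WITH NORECOVERY;

-- 4. Restore every log backup taken after the differential, in sequence,
--    finishing with the tail-log backup.
RESTORE LOG MyDatabase FROM DISK = N'C:\Backup\MyDatabase_log1.trn' WITH NORECOVERY;
RESTORE LOG MyDatabase FROM DISK = N'C:\Backup\MyDatabase_tail.trn' WITH NORECOVERY;

-- 5. Bring the database online.
RESTORE DATABASE MyDatabase WITH RECOVERY;
```

As noted in step 5 of the text, WITH RECOVERY can alternatively be combined with the restore of the last log backup.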

All this data references different sources, i.e. Microsoft.

Reading SQL Server Database Transaction Log

It is very exciting for me to query and look into the transaction log and see what is happening behind the scenes. I will explain how to read your database transaction log file and how transactions are written for your database when you perform any database activity. There is an undocumented function called "fn_dblog" which enables you to read data from your transaction log, and it contains very informative data about things that are happening in your database.

The function (fn_dblog) takes a beginning LSN and an ending LSN for a transaction. NULL is the default for both parameters, and it returns all log records from the transaction log file.

Create a Database

To show how this works, we will create a database and a table to play with this function. Run the below SQL code to create a database and table.

-- Create the database.
USE [master];
GO
CREATE DATABASE ReadLog;
GO
-- Create the table.
USE ReadLog;
GO
CREATE TABLE Abc (
    [Sr.No] INT IDENTITY (1, 1),
    [Date]  DATETIME DEFAULT GETDATE(),  -- middle column name assumed; the original listing is truncated
    [Value] CHAR (25) DEFAULT 'Hoora');

We have created a database named "ReadLog" and a table 'Abc' with three columns. Now you can check all the information and processes which SQL Server used to create the database and table. We will run the code below against the log file for this newly created database to see what steps SQL Server took to create the database and table.

USE ReadLog;
select COUNT(*) from fn_dblog(null,null)

We can see that 476 rows have been generated just by creating a dummy database and a blank table. Run the code below to see the data in the transaction log file, as shown in the attached picture.

USE ReadLog;
select [Current LSN],
       [Transaction Name],
       [Transaction ID],
       [Transaction SID],
       [Begin Time]
FROM   fn_dblog(null,null)

You can see in the above screenshot that the transaction name column shows the database name; similarly it will show "create table" for the table creation code. The Transaction ID is the same for all parts of a transaction. The value for transaction name is filled in only where the particular transaction starts, marked by "LOP_BEGIN_XACT" in the Operation column ("LOP_BEGIN_XACT" means begin transaction). The Operation column lets us know which operation is being performed, like an insert, update, delete, shrink, lock, page allocation, etc. It is pretty easy to understand the operations based on these keywords and see what SQL Server is doing.

Looking at Insert Log

Now we will run a few DML scripts to check how data insertion, updating or deletion is logged in the database log file. During this operation you can also track how a page is allocated or de-allocated.
USE ReadLog;
GO
-- Insert 100 rows (all columns have defaults).
INSERT INTO Abc DEFAULT VALUES;
GO 100
-- Update a few rows.
UPDATE Abc
SET [Value] = 'Manama'
WHERE [Sr.No] < 5;
-- Delete a few rows.
DELETE FROM Abc
WHERE [Sr.No] > 90;
Let's check our database log file again. As we saw from above, there is a lot of info logged in the transaction log file, so I will filter the data.
USE ReadLog;
SELECT
 [Current LSN],
 [Transaction ID],
 [Transaction Name],
 [Page ID],
 [Slot ID],
 [Begin Time],
 [End Time],
 [Number of Locks],
 [Lock Information]
FROM sys.fn_dblog(NULL,NULL)
-- Operation list assumed: the original filter was truncated.
WHERE Operation IN ('LOP_BEGIN_XACT','LOP_INSERT_ROWS','LOP_COMMIT_XACT')

Your output will look something like the above screenshot after running the above script. From the context it is easy to see the rows inserted into the table (a heap), along with start and end times, page information, table info, etc.

Similarly, it will show you the same kind of information for UPDATE and DELETE statements. You can get lots of information from the log file, like page splits, partitions, page allocations, etc.

Transaction Log behavior on Backup

Now I will run a backup and see the transaction log file again. Run a backup on this database and then again check the transaction log file.

USE ReadLog;
GO
SELECT COUNT(*)
FROM   fn_dblog(null,null);
GO
BACKUP DATABASE [ReadLog] TO DISK = N'c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Backup\ReadLog.bak' WITH NOFORMAT, NOINIT,
NAME = N'ReadLog-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10;
GO
SELECT COUNT(*)
FROM   fn_dblog(null,null);

As we can see, the number of rows has been drastically reduced after taking the backup: from 151 rows down to 9. This means the inactive part of the log which tracked the transactions has been written to the backup file and the corresponding entries have been cleared from the log file. Now you can shrink the log file if necessary.

Other ways to see log data besides this function

DBCC LOG(): a command to see log information, but it will not give you detailed information.
Trace flag 2537: lets you look at all the logs, not just the active log.
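These two can be combined as sketched below. DBCC LOG is undocumented, so take this as an illustration; the second argument is the commonly cited output-detail level.

```sql
-- Include the inactive portion of the log, then dump log records for ReadLog.
DBCC TRACEON (2537);
DBCC LOG ('ReadLog', 3);  -- detail levels range from 0 (minimal) to 4 (full)
```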

Recover data from Log file.

Using this function fn_dblog() you can find the LSN you need and restore a T-Log backup up to that LSN, so that you can retrieve your data up to that point.

There are some interesting options to recover your data from the log and log backups. I will blog about these after testing them. I gathered all this data from different blogs, as follows:

Trace flag 2536: You can see inactive portion of the log if not truncated or backed up.

fn_dump_dblog: allows you to dump and search log records from a log backup file, without having to restore the database

Related Blogs from different users.

Monday, March 31, 2014

Accessing FoxPro data files from SQL Server with Link Server


My management assigned me the task of finding a way for two applications to communicate. One is a finger-tech application that records fingerprint attendance; the other is a .NET-based CRM. The CRM has SQL Server as its backend, but the finger-tech application is built on some other technology that needs to communicate with SQL Server to send data into the CRM database.


I tried to find a solution by reading articles about the finger-tech software but did not find a good one. The only solution I found was exporting data from the application into SQL Server over an ODBC connection, but that is a manual process.
While looking at its data files I realized they could be Access or FoxPro files. I tried everything with Access to export data into SQL Server, or to import the data from Access using SQL Server. Then I found a blog mentioning that .dbf files are FoxPro files and that there is a provider available from Microsoft that needs to be installed to connect them with SQL Server using SSIS or a linked server. I prefer a linked server because I need to read data from the live database instead of some backup.

Step1: Microsoft OLE DB Provider for Visual FoxPro 9.0 

The Visual FoxPro OLE DB Provider (VfpOleDB.dll) exposes OLE DB interfaces that you can use to access Visual FoxPro databases and tables from other programming languages and applications.

Step2: Installing or upgrading OLEDB Provider

You can download this provider from Microsoft download center

Once the package has downloaded, you will find these two files in your download folder.

Next we need to run the setup file.

Once the installation is successful, you can see the new VFPOLEDB provider in the SSMS provider list.

Step3: Creating Link Server using FoxPro Provider

From the Linked Servers tab in SSMS we need to create a new linked server for FoxPro.

The following values need to be added:

1- Name of the linked server.
2- Under "Other data source", select the Visual FoxPro provider.
3- Product Name is a descriptive name for the data source.
4- In Data Source you need to mention the path of your DBF data files. If you want to access only one file you can give its path; otherwise you need to provide the folder path to access all the tables.
5- In Provider String, the FoxPro provider dll is mentioned.
6- You can also generate the linked server using the Script button at the top.
7- From a security point of view, I created it using the login's security context.

Step4: Accessing Tables and other data

Once the linked server is created, you can view and query your tables and views.

While trying to query it using the FoxPro provider, it generated an error:

Cannot create an instance of OLE DB Provider VFPOLEDB object

And here is the fix for that:

Server Objects > Linked Servers > Providers > VFPOLEDB > General tab > Provider options > Allow inprocess.
Alternatively, you can change the InProcess setting with the following code:
USE [master]
EXEC master.dbo.sp_MSset_oledb_prop N'VFPOLEDB', N'AllowInProcess', 1

Thanks to the discussion in the following thread.