Working with SQL Server Database on Microsoft Azure

One of the trickiest implementations for any organization is the database: it requires a dedicated server and someone to manage it. Cloud services such as Microsoft Azure give organizations the opportunity to store their data on Azure servers and even have it managed for them, allowing companies to focus on their business instead of worrying about data storage and management. This blog post goes beyond the generic benefits of cloud database management and focuses on database implementation on MS Azure. In fact, it is the first of a two-part series highlighting the use cases of MS Azure SQL Database. First up, we learn how to implement a SQL Server database on MS Azure.

Microsoft Azure SQL Database

Microsoft Azure SQL Database is an intelligent, fully managed relational cloud database service, providing broad SQL Server engine compatibility that allows migration of SQL Server databases without changing the applications. It includes built-in intelligence that learns app patterns and adapts to maximize performance, reliability and data protection.

Getting Started

The very first step before you can use any Microsoft Azure service is to obtain a Microsoft account, if you don’t already have one. We signed up with a Microsoft account and completed the Microsoft Azure subscription process. Once you complete the subscription process and log in, you’ll see the dashboard as shown in Figure 1.

Note: You can also log into Microsoft Azure without any subscription by simply creating an account, but you won’t get access to paid features such as creating a SQL database.

Figure 1 –


Creating SQL Database in MS Azure

Before creating a SQL database in MS Azure, you need to create a resource group, as shown in Figure 2. Specify the ‘Resource group name’ and the ‘Resource group location’ where your server and resources will reside. It is recommended to choose the closest region for best performance; we selected ‘Central India’ as the ‘Resource group location’. The resource group name will be used later while creating the SQL database.

Further, place any Microsoft Azure cloud websites and services in the same region as the SQL Database server. This avoids the bandwidth-based charges incurred when cloud sites, services, and databases communicate across different Azure regions, and also reduces latency.

Figure 2 –


After naming and allocating the resources, click the ‘SQL databases’ option on the left to create a SQL database. You’ll see the page shown in Figure 3.

Figure 3 –


On that screen, click the ‘Create SQL database’ button. It opens the screen shown in Figure 4, where you can create a database.

Figure 4 –


You also need to create a server to host your SQL database; this server can host multiple databases. All you need to do is create an administrator account, fill in the details, and click the ‘Create’ button. Figures 5 and 6 depict these steps.

Figure 5 –


Figure 6 –


We have created a database with the name ‘CustomDB’.
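The same database can also be created with plain T-SQL against the logical server, for example from SSMS. A minimal sketch, assuming a connection to the master database of the server created above; the edition and service objective values are examples, not a recommendation:

```sql
-- Run while connected to the master database of the Azure logical server.
-- EDITION and SERVICE_OBJECTIVE are examples; pick a tier that fits your workload.
CREATE DATABASE CustomDB
(
    EDITION = 'Basic',
    SERVICE_OBJECTIVE = 'Basic'
);
```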

Figure 7 –


You can further use the ‘Query Editor’ to connect to the database and create the required tables and other database objects, as illustrated in Figure 8.

Figure 8 –
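As a sketch of the kind of statements you might run in the Query Editor (the table and column names here are hypothetical):

```sql
-- Create a sample table in CustomDB and load a row (hypothetical schema).
CREATE TABLE dbo.Customers
(
    CustomerID   INT IDENTITY(1, 1) PRIMARY KEY,
    CustomerName NVARCHAR(100) NOT NULL,
    City         NVARCHAR(50)  NULL
);

INSERT INTO dbo.Customers (CustomerName, City)
VALUES (N'Contoso Ltd', N'Pune');

SELECT CustomerID, CustomerName, City
FROM dbo.Customers;
```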


With that we complete our first installment of working with SQL Server Database on MS Azure, wherein we registered with MS Azure and learned how to create a database, tables, and other database objects in it. In the next installment, we’ll illustrate how you can migrate a database to MS Azure.

Categories: MS Azure

SQL Server Execution Plan

In a database environment, certain performance questions pop up repeatedly: why is my query running slowly? Why is SQL Server not using the index? As a DBA, the first response is usually to look at the execution plan. This blog is about understanding the SQL Server execution plan.

What is an Execution Plan?

An execution plan tells us what is going on in the background when SQL Server executes a query. It shows how the query optimizer joined the data from the various tables defined in the query, which indexes it used (if any), how it performed any aggregations or sorting, and much more. It also estimates the cost of all of these operations in terms of the relative load placed on the system.

What Happens When a Query is submitted?

When we submit a query to SQL Server, a number of processes on the server go to work on that query, all with the purpose of managing the system so that it can read and write the data.

These processes come into action every time a query is submitted. While there are many different actions occurring simultaneously within SQL Server, here we focus on the processes around queries. The processes for meeting the requirements of queries break down into two stages:

  1. Processes that occur in the relational engine.
  2. Processes that occur in the storage engine.

Estimated and Actual Execution Plans

There are two types of execution plans.

One is the Estimated Execution Plan, which represents the output of the optimizer. The operators, or steps, within the plan are logical steps, because they illustrate the optimizer’s view of the plan and don’t represent what physically occurs when the query runs. Since estimated plans never access data, they are very useful for large and complex queries that have long execution times.

The other is the Actual Execution Plan, which represents the output of the actual query execution. It shows what actually happened when the query executed. Actual execution plans are generally preferred because they include important runtime statistics, such as the number of rows accessed by a given operator.

  • Generating Estimated Execution Plan

We’ll start by viewing the graphical estimated execution plan that the query optimizer generated. To get the estimated execution plan, select the query and click the Display Estimated Execution Plan icon as shown below. We can also achieve this by pressing Ctrl+L.


Fig 1
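The estimated plan can also be requested from T-SQL instead of the toolbar icon; the SET SHOWPLAN_ALL session option returns the plan without executing the query. The table name below is hypothetical:

```sql
-- Ask SQL Server for the estimated plan only; the SELECT is compiled, not executed.
SET SHOWPLAN_ALL ON;
GO
SELECT * FROM dbo.Customers;
GO
SET SHOWPLAN_ALL OFF;
GO
```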


  • Generating Actual Execution Plan

To generate the actual execution plan, select the query, click the Include Actual Execution Plan icon as shown below, and then execute the query. We can also enable Include Actual Execution Plan by pressing Ctrl+M. The output will include the result of the query along with the execution plan.

Fig 2.1



Fig 2.2
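The equivalent T-SQL session option for the actual plan is SET STATISTICS XML, which executes the query and returns the runtime plan alongside the results (table name hypothetical):

```sql
-- Execute the query and return the actual plan as XML with the results.
SET STATISTICS XML ON;
GO
SELECT * FROM dbo.Customers;
GO
SET STATISTICS XML OFF;
GO
```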



Understanding graphical execution plans

The icons in fig 1 and fig 2.1 represent two different operators. On the left side of the execution plan is the Select operator, and on the right side is the Table Scan operator. A Table Scan is one of the operators to look for when troubleshooting performance issues. Graphical plans are read physically from right to left, following the flow of data, while the logical order runs left to right: the definition of the SELECT criteria followed by the Scan operator.

The arrows represent the data passed between the operators. The thickness of an arrow reflects the amount of data passed: a thicker arrow means more rows. This is another visual clue as to where performance issues may lie. Hovering the mouse pointer over an arrow shows the number of rows, as illustrated in fig 3.

Fig 3



Each of the icons and arrows has a pop-up window called a ToolTip, which we can access by hovering the mouse pointer over the icon. Using the query shown in the figures above, we pull up the estimated execution plan. Hovering over the Select operator shows the ToolTip window in fig 4 below.

Fig 4



What does the ToolTip window indicate?

  • Cached plan size – How much memory the plan generated by this query will take up in the plan cache.
  • Degree of Parallelism – Whether this plan uses multiple processors. This plan uses a single processor, as shown by the value of 1.
  • Estimated Operator Cost – The cost of the operation, presented as a percentage of the total batch cost.
  • Estimated Subtree Cost – The cumulative estimated cost of this operator and all the operators that feed into it.
  • Estimated Number of Rows – The number of rows the optimizer estimates the operator will produce.


The ToolTip information for the next operator in the execution plan, the Table Scan, is shown in fig 5.

Fig 5



The logical operators are the results of the optimizer’s calculations for what should happen when the query executes. The physical operators represent what actually occurred.

Estimated I/O Cost and Estimated CPU Cost are the estimated costs of all I/O activity and all CPU activity for the operation, respectively.

An execution plan basically tells us how SQL Server may execute a query, or how it actually executed one, helping us identify poorly performing queries.
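For example, when the plan shows a Table Scan driven by a selective WHERE clause, adding a supporting nonclustered index will often let the optimizer use a seek instead. A sketch with hypothetical names:

```sql
-- Hypothetical: the plan showed a Table Scan for this query.
SELECT CustomerID, CustomerName
FROM dbo.Customers
WHERE City = N'Pune';

-- A covering nonclustered index lets the optimizer seek instead of scanning.
CREATE NONCLUSTERED INDEX IX_Customers_City
    ON dbo.Customers (City)
    INCLUDE (CustomerName);
```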

Reset DMV counters

The following two DMVs can be reset without restarting the SQL Server instance: sys.dm_os_latch_stats and sys.dm_os_wait_stats.


Use the DBCC queries below to do this.

DBCC SQLPERF('sys.dm_os_latch_stats', CLEAR);

DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);
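To confirm that the counters were reset, the DMV can be queried before and after; the top waits should show near-zero times immediately after the CLEAR:

```sql
-- Inspect the highest cumulative waits; rerun after the CLEAR to verify the reset.
SELECT TOP (5) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```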

Categories: T-SQL

Identity Value jump in SQL Server 2012

A few days back, a developer came to me and said he had observed a jump of 1000 in the IDENTITY value of one of his columns. It looked unusual at first. We later found that with every restart of the SQL Server service, the seed value of the IDENTITY columns of some tables jumps by 1000 (for the INT data type) or by 10000 (for the BIGINT and NUMERIC data types). We observed this on standalone servers as well as clusters.

This looked strange and could have a potential impact in production. We can fix this in two ways.

1. Add the -T272 trace flag to the startup parameters in Configuration Manager and restart the SQL Server service. Follow the steps below to do this.

  • Open SQL Server Configuration Manager
  • Right-click on your SQL Server instance on the right pane and click ‘Properties’
  • Click ‘Startup Parameters’
  • In the ‘Specify a startup parameter’ textbox, type ‘-T272’ and click ‘Add’
  • Apply the change
  • Restart your SQL Services

This trace flag causes a log record to be generated for each generated identity value, which disables identity caching. The performance of identity generation may be impacted by turning on this trace flag.

2. You can also use a sequence generator with the NO CACHE option, but this requires code changes in the current database, which might not be preferable. If a new database is to be created, this option can certainly be used.
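A minimal sketch of the sequence approach, with hypothetical object names; like the trace flag, NO CACHE trades some performance for predictable values across restarts:

```sql
-- A sequence with NO CACHE pre-allocates no values, so a service
-- restart does not skip a cached range (rollbacks can still leave gaps).
CREATE SEQUENCE dbo.OrderNumberSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

-- Use the sequence instead of an IDENTITY column (hypothetical table).
CREATE TABLE dbo.Orders
(
    OrderID   INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.OrderNumberSeq),
    OrderDate DATETIME2 NOT NULL DEFAULT (SYSUTCDATETIME())
);
```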

Latest service packs for SQL Server 2008 and 2008 R2

December 25, 2014

SQL Server 2008 Service Pack 4

Download SQL Server 2008 SP4 from Microsoft site below:

SQL Server 2008 R2 Service Pack 3

Download SQL Server 2008 R2 SP3 from Microsoft site below:

Categories: Implementation

WMI Provider Error when opening SQL Server Configuration Manager

September 10, 2013

Many of us would have encountered the error below when opening SQL Server Configuration Manager.

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager. Invalid class [0x80041010]

This occurs mostly when the WMI provider is removed after uninstalling an instance of SQL Server. It is explained in more detail in the corresponding Microsoft KB article.

To troubleshoot this error, locate the file sqlmgmproviderxpsp2up.mof in the folder C:\Program Files (x86)\Microsoft SQL Server\110\Shared.

Then open a command prompt as Administrator and run the command below.

C:\Program Files (x86)\Microsoft SQL Server\110\Shared>mofcomp sqlmgmproviderxpsp2up.mof
Microsoft (R) MOF Compiler Version 6.2.9200.16384
Copyright (c) Microsoft Corp. 1997-2006. All rights reserved.
Parsing MOF file: sqlmgmproviderxpsp2up.mof
MOF file has been successfully parsed
Storing data in the repository…

You can then connect to SQL Server Configuration Manager.

Note: The folder number in the path depends on the version of SQL Server; here, 110 refers to SQL Server 2012.

Proportional Fill Algorithm

September 3, 2013

SQL Server allocates data amongst the data files in a filegroup using a round-robin mechanism combined with a proportional fill algorithm. Allocation starts from data file 1, then data file 2, and so on, and then comes back to data file 1; however, the number of allocations each file receives is proportional to the free space in that file, so a data file with more free space receives more extents. As a result, when we add a new data file to a filegroup, most new allocations go to this new data file until it fills up to the level of the older data files.
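As a sketch, adding a second data file to a filegroup looks like this (database name, path, and sizes are hypothetical); once added, proportional fill will direct most new allocations to it until the files level out:

```sql
-- Add a new, mostly empty data file to the PRIMARY filegroup (hypothetical names/paths).
ALTER DATABASE SalesDB
ADD FILE
(
    NAME = N'SalesData2',
    FILENAME = N'D:\SQLData\SalesData2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
)
TO FILEGROUP [PRIMARY];
```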

To move a table from one filegroup to another, we can rebuild its clustered index using the T-SQL below.

CREATE CLUSTERED INDEX <IndexName> ON <TableName> (<ColumnName>) WITH (DROP_EXISTING = ON) ON <FileGroupName>;