
Locking in Microsoft SQL Server (Part 2 – Locks and transaction isolation levels)

Last time we discussed a few of the major lock types that SQL Server uses: Shared (S), Exclusive (X) and Update (U). Today I'd like to talk about transaction isolation levels and how they affect locking behavior. But first, let's start with the question: "What is a transaction?"

A transaction is a complete unit of work. Assume you transfer money from a checking account to a savings account: the system should deduct money from checking and add it to savings at once. Even though those are 2 independent operations, you don't want the process to stop in the middle – well, at least not in the case when the bank deducts it from checking first 🙂 If you don't want to take that risk, you want both operations to work as one single action.

There is a useful acronym – ACID – that describes the requirements for a transaction:

  • (A) – Atomicity, or "all or nothing". Either all changes are saved, or nothing is changed.
  • (C) – Consistency. Data remains in a consistent state all the time.
  • (I) – Isolation. Other sessions don't see the changes until the transaction is completed. Well, this is not always true and depends on the implementation. We will talk about it in a few minutes.
  • (D) – Durability. A transaction should survive and be recoverable after system failures.

There are a few common myths about transactions in SQL Server, such as:

  • There are no transactions if you call insert/update/delete statements without begin tran/commit statements. Not true. In such cases SQL Server starts an implicit transaction for every statement. Not only does this violate consistency rules in a lot of cases, it's also extremely expensive. Try to run 1,000,000 insert statements within an explicit transaction and without one, and notice the difference in execution time and log file size (see the sketch right after this list).
  • There are no transactions for select statements. Not true. SQL Server uses (lighter-weight) transactions with select statements.
  • There are no transactions when you use the (NOLOCK) hint. Not true. The (NOLOCK) hint downgrades the reader to the read uncommitted isolation level, but transactions are still in play.
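For example, you can see the cost of implicit transactions with a simple test along these lines (the table name and the 100,000-row count are just for illustration):

```sql
-- Hypothetical test table
create table dbo.TranTest(Id int not null, Placeholder char(100) null);
go

-- 100,000 individual inserts: every statement runs in its own implicit
-- transaction and forces a separate synchronous log flush
declare @i int = 1;
while @i <= 100000
begin
    insert into dbo.TranTest(Id) values(@i);
    set @i += 1;
end;
go

-- The same inserts wrapped in one explicit transaction: a single commit
-- and dramatically less waiting on the transaction log
declare @i int = 1;
begin tran
    while @i <= 100000
    begin
        insert into dbo.TranTest(Id) values(@i);
        set @i += 1;
    end;
commit
go
```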

Each transaction starts in a specific transaction isolation level. There are 4 "pessimistic" isolation levels – read uncommitted, read committed, repeatable read and serializable – and 2 "optimistic" isolation levels – snapshot and read committed snapshot. With pessimistic isolation levels, writers always block writers and typically block readers (with the exception of the read uncommitted isolation level). With optimistic isolation levels, writers don't block readers, and in the snapshot isolation level writers don't block writers either (although there will be a conflict if 2 sessions update the same row). We will talk about optimistic isolation levels later.

Regardless of the isolation level, exclusive (X) locks acquired by data modifications are always held till the end of the transaction. The difference in behavior is how SQL Server handles shared (S) locks. See the table below:
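Isolation level     (S) lock behavior
Read uncommitted    (S) locks are not acquired
Read committed      (S) locks are acquired and released immediately after the read
Repeatable read     (S) locks are acquired and held till the end of the transaction
Serializable        (S) locks are acquired on key ranges and held till the end of the transaction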


So, as you can see, in read uncommitted mode shared locks are not acquired. As a result, readers (select statements) can read data modified by other, uncommitted transactions even when those rows hold (X) locks, so any side effects (such as dirty reads) are possible. Obviously it affects (S) lock behavior only – writers still block each other.

In any other isolation level, (S) locks are acquired, and a session is blocked when it tries to read an uncommitted row that holds an (X) lock. In read committed mode, (S) locks are acquired and released immediately after the read. In repeatable read mode, (S) locks are acquired and held till the end of the transaction, so other sessions cannot modify data once it has been read. The serializable isolation level works similarly to repeatable read, with the exception that locks are acquired on ranges of rows, so other sessions also cannot insert new data in between once the data has been read.

You can control that locking behavior with the "set transaction isolation level" statement, if you want to do it at the transaction/statement scope, or at the table level with table hints. So it's possible to have a statement like this:
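For example, something along these lines (the column and join key names are illustrative):

```sql
select t1.SomeColumn, t2.OtherColumn
from dbo.Table1 t1 with (nolock)                -- read uncommitted
    join dbo.Table2 t2 with (serializable) on   -- serializable
        t2.Table1Id = t1.Table1Id;
```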

So you access Table1 in the read uncommitted isolation level and Table2 in the serializable isolation level.

It's extremely easy to understand the differences between the transaction isolation levels and their side effects when you keep locking in mind. Just remember the (S) lock behavior and you're all set.

Next time we will talk about why we have blocking in the system and what we can do to reduce it.

Part 3 – Blocking in the system

Table of content

Locking in Microsoft SQL Server (Part 1 – Lock types)

One of the most challenging issues for developers who don't live in the RDBMS world is how to make a system work seamlessly in a multi-user environment. Code that works perfectly in development and QA starts to fall apart when dozens of users access the system. There are timeouts, deadlocks and other issues that developers cannot even reproduce in-house. It does not really matter that SQL Server uses row-level locking or that the transaction isolation level is set to read uncommitted – locking, blocking and deadlocking still occur.

Today I'm going to start a series of posts about locking in Microsoft SQL Server. I'll try to explain why blocking and deadlocks occur in the system, how you can troubleshoot related problems and what you should do to minimize them. We will cover different transaction isolation levels and see how and why they affect the behavior of the system. And we'll talk about quite a few other things.

Update (2018-01-23): Consider reading Part 21: Intro into Transaction Management and Error Handling first

So let's start with lock types. What is a lock? In short, it is an in-memory structure (64 bytes on a 32-bit OS or 128 bytes on a 64-bit OS). The structure has an owner, a type and a resource hash that links it to the resource it protects (row, page, table, file, database, etc.). Obviously it's more complicated and has quite a few other attributes, but for our practical purposes that level of detail is enough.

SQL Server has more than 20 different lock types, but for now let's focus on the most important ones.

  • Shared locks (S). These locks are acquired by readers during read operations such as SELECT. I'd like to mention that this happens in most cases but not all the time – there are some cases when readers don't acquire (S) locks. We will talk about them later.
  • Exclusive locks (X). These locks are acquired by writers during data modification operations such as insert, update or delete. They prevent the same object from being modified by different sessions. These locks are always acquired and held till the end of the transaction.
  • Update locks (U). These locks are a mix between shared and exclusive locks. SQL Server uses them with data modification statements while searching for the rows that need to be modified. For example, if you issue a statement like "update MyTable set Column1 = 0 where Column1 is null", SQL Server acquires an update lock for every row it processes while searching for rows where Column1 is null. When an eligible row is found, SQL Server converts the (U) lock to an (X) lock.
  • Intent locks (IS, IX, IU, etc.). These locks indicate locks on child objects. For example, if a row has an (X) lock, it introduces (IX) locks at the page, table and database levels. The main purpose of these locks is optimization. Think about the situation when you need exclusive access to the database (i.e. an (X) lock at the database level). If SQL Server did not have intent locks, it would have to scan all rows in all objects to see if any lower-level locks were acquired. (The sketch right after this list shows row-, page- and object-level locks in action.)
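A minimal sketch of how you can see those locks yourself, assuming the MyTable from the (U) lock example above exists:

```sql
begin tran
    update dbo.MyTable set Column1 = 0 where Column1 is null;

    -- The modified rows hold (X) locks; the page and the table hold (IX) locks
    select resource_type, request_mode, request_status
    from sys.dm_tran_locks
    where request_session_id = @@spid;
rollback
```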

Obviously the biggest question is lock compatibility. If you open the MSDN site, you'll see a nice and "easy to understand" matrix with more than 400 cells. But for our practical purposes, let's focus on a smaller version:
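        (S)    (U)    (X)
(S)     Yes    Yes    No
(U)     Yes    No     No
(X)     No     No     No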

So what we need to remember is basically 3 things:

  1. (S) locks are compatible with (S) and (U) locks.
  2. (X) locks are incompatible with any other lock type.
  3. (U) locks are compatible with (S) but incompatible with (U).

Simple enough. Next time we will look at transaction isolation levels and see how they affect locking behavior.

Part 2 – Locks and transaction isolation levels

Table of content

Indexed (materialized) views in Microsoft SQL Server

Even if they sound almost the same as regular views, indexed views are completely different animals. That type of view is not only about abstraction but more about performance. When you create an indexed view, SQL Server "materializes" the data in the view into a physical table, so instead of doing complex joins, aggregates, etc., it can query the data from that "materialized" table. Obviously it's faster and more efficient.

Let's take a look at that using our favorite Clients and Orders tables. Before we begin, I'd like to mention that there are quite a few requirements you have to meet when you create indexed views, and quite a few limitations. You can get more information on MSDN (http://msdn.microsoft.com/en-us/library/ms191432.aspx).

So let's run a query that returns the list of clients who spent more than 900.00 on their orders, together with the number of orders.
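The query could look something like this (the column names are illustrative):

```sql
select c.ClientName, sum(o.Amount) as TotalSales, count(*) as OrderCnt
from dbo.Clients c join dbo.Orders o on
    o.ClientId = c.ClientId
group by c.ClientName
having sum(o.Amount) > 900.00;
```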

Now let’s create the indexed view.
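A sketch of such a view (the view and index names are illustrative; note the schemabinding option, two-part table names and count_big(*) – they are required for indexed views with aggregates):

```sql
create view dbo.vClientSales
with schemabinding
as
    -- Amount is assumed to be non-nullable (a requirement for SUM here)
    select c.ClientId, c.ClientName,
        sum(o.Amount) as TotalSales, count_big(*) as OrderCnt
    from dbo.Clients c join dbo.Orders o on
        o.ClientId = c.ClientId
    group by c.ClientId, c.ClientName
go

-- The unique clustered index is what actually materializes the view
create unique clustered index IDX_vClientSales_ClientId
on dbo.vClientSales(ClientId);
```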

Now let’s run the query against this view.
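Using the vClientSales sketch from above, the query becomes a simple filter against the materialized data:

```sql
select ClientName, TotalSales, OrderCnt
from dbo.vClientSales
where TotalSales > 900.00;
```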

As you can see, the situation is dramatically improved. But that's not all. Now let's run the original statement in the Enterprise Edition of SQL Server and look at the plan. And this is the magic – even if you don't reference the view in the select, SQL Server figures out that it can use the view for this select.

This is in fact a very good optimization technique when you need to deal with 3rd-party applications. If the vendor does not allow you to change the indexes on the tables, you can create indexed views, and SQL Server Enterprise Edition will use them automatically. Unfortunately this is not the case with editions other than Enterprise and Developer. Let's see that:

With the Standard Edition of SQL Server, the query does not even use the "materialized" data by default. If you want to force SQL Server to use the view data, you have to use the (noexpand) hint.
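The same query with the hint:

```sql
select ClientName, TotalSales, OrderCnt
from dbo.vClientSales with (noexpand)
where TotalSales > 900.00;
```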

Obviously, the other magic, like using the view indirectly, would not work either.

What does it mean for you? First of all, if you expect to support different editions of SQL Server on the backend, you should keep this behavior and the noexpand hint in mind. Obviously the optimization technique for 3rd-party applications would not work on those editions either.

The last thing I'd like to show is the performance implications. Let's insert a new order.
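Something along these lines (the values and the column list are purely illustrative):

```sql
insert into dbo.Orders(ClientId, OrderDate, OrderNumber, Amount)
values(1, getdate(), '100001', 500.00);
```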

As you can see, it introduces a nice performance hit because of the view maintenance. Similar to indexes – you have to pay the price of view maintenance for the benefit of performance improvements. Is it worth doing in your system? Hard to say, especially if you have a heavily loaded OLTP system. For Data Warehouse/Reporting/OLAP systems it could be greatly beneficial. Another thing to keep in mind – indexed views shine when you use them with aggregates.

Source code is available for download

Views in Microsoft SQL Server (Part 2 – Schemabinding and things to avoid)

Last week we saw a few potential problems that views can introduce because of the extra IO and joins. Today I'll show you the biggest "bad practice" with views and a simple way to avoid it. All of us are lazy, and sometimes it really backfires.

Let's use the same tables (Clients and Orders) we had last week. Let's add another column to the Orders table and create the simplest possible view.
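The setup could look like this (the column and view names are illustrative):

```sql
alter table dbo.Orders add Col1 int null;
go

-- The "lazy" way: select * instead of an explicit column list
create view dbo.vOrders
as
    select *
    from dbo.Orders
go
```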

Now let’s query the view.

Simple, don't you think? Now let's add another column to the table and query the view again.
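For example (the new column name is hypothetical):

```sql
alter table dbo.Orders add Col2 int null;
go

select * from dbo.vOrders;
```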

As you can see, it does not appear in the result set. Now let's do the opposite action and drop both columns.
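Continuing the sketch:

```sql
alter table dbo.Orders drop column Col1, Col2;
go

-- This select now fails: the view definition still references Col1
select * from dbo.vOrders;
```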

As you can see, nothing prevents us from doing that. Although, if we run the same select again, we will get the following error.

If we check the catalog views, we can see that Col1 is still listed there as a column of the view. Unfortunately this is only part of the problem. You can "recompile" the view with the sp_refreshview stored procedure, but still…
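In this case, something like:

```sql
exec sp_refreshview @viewname = N'dbo.vOrders';
```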

Now let's do another test. Let's create another view first, and then add and drop columns.
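Continuing the sketch (again, the names are illustrative):

```sql
create view dbo.vOrders2
as
    select *
    from dbo.Orders
go

select * from dbo.vOrders2;   -- works fine at this point
go

alter table dbo.Orders add Col1 int null;
alter table dbo.Orders drop column Amount;
go
```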

Now, if we run the select again, it works, although the results are completely screwed up. It returns Placeholder data as Amount (the dropped column) and the new Col1 data under the Placeholder column. Can you imagine all the wonderful side effects you can have on the client side?

So speaking about that, the first rule is: never ever use "select *" in a view definition. This is the source of all potential problems. Second, it is good practice to use the schemabinding option. With this option SQL Server links the view to the underlying schema and does not allow any changes that can break the view.
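A sketch of a schema-bound version (the view name is illustrative; schemabinding requires an explicit column list and two-part table names):

```sql
create view dbo.vOrders3
with schemabinding
as
    select OrderId, OrderNumber, Placeholder
    from dbo.Orders
go

-- This now fails: the column is referenced by a schema-bound view
alter table dbo.Orders drop column Placeholder;
```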

Source code is available for download

Views in Microsoft SQL Server (Part 1 – “Joins hell”)

I don't like views. I like layers of abstraction and dedicated tiers in the system, but I don't like views. It seems kind of strange – regular views are basically an abstraction on the metadata layer. Nothing less, nothing more. They give you some benefits in terms of security and some benefits of code abstraction. I just think that views introduce more problems than they solve.

Today I'd like to focus on one specific aspect of views – I call it "joins hell". Remember "DLL hell" from the good old days? Maybe not too old, by the way.

Let's create a couple of tables and populate them with data.
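A sketch of the setup (the exact schema and amount of data are not important here):

```sql
create table dbo.Clients
(
    ClientId int not null,
    ClientName varchar(64) not null,
    constraint PK_Clients primary key clustered(ClientId)
);

create table dbo.Orders
(
    OrderId int not null identity(1,1),
    ClientId int not null,
    OrderDate datetime not null,
    OrderNumber varchar(32) not null,
    Amount money not null,
    Placeholder char(100) null,
    constraint PK_Orders primary key clustered(OrderId)
);
go

-- Populate with some test data: 100 clients, 100 orders each
;with N(Num) as
(
    select 1 union all select Num + 1 from N where Num < 100
)
insert into dbo.Clients(ClientId, ClientName)
    select Num, 'Client ' + convert(varchar(10), Num) from N;

;with N(Num) as
(
    select 1 union all select Num + 1 from N where Num < 100
)
insert into dbo.Orders(ClientId, OrderDate, OrderNumber, Amount)
    select c.ClientId, dateadd(day, -n.Num, getdate()),
        convert(varchar(32), c.ClientId * 1000 + n.Num), n.Num
    from dbo.Clients c cross join N n;
go
```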

Let's assume we want to have a view that joins both tables and returns the client name in addition to the order columns.
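Something like this (the view name is illustrative):

```sql
create view dbo.vOrders
as
    select o.OrderId, o.ClientId, o.OrderDate, o.OrderNumber, o.Amount,
        c.ClientName
    from dbo.Orders o join dbo.Clients c on
        o.ClientId = c.ClientId
go
```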

Simple enough. Natural enough, especially if you come from the imperative development world. You have the view as the abstraction, and you can use it everywhere you need to access orders. So let's select one row, including the client name, for one specific order.
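For example (the OrderId value is arbitrary):

```sql
select OrderId, OrderNumber, Amount, ClientName
from dbo.vOrders
where OrderId = 100;
```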

Good. As you can see, it introduces 2 clustered index seeks + a join. You cannot do any better than that. Now, let's say you need to populate a list of order numbers. So let's run it again, selecting OrderId and OrderNumber only (I'm still filtering down to a single row, but that hardly matters).
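Re-running it against the same view, this time selecting only the two columns:

```sql
select OrderId, OrderNumber
from dbo.vOrders
where OrderId = 100;
```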

Oops – there is still a join with the Clients table even though you don't need any columns from it. Well, if you think about it, it makes sense. If you use an inner join in the view, SQL Server needs to make sure that every order has a corresponding client and filter out orders without a valid ClientId.

So the first way to work around it is to use an outer join instead.
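A sketch of the outer join version (the view name is illustrative):

```sql
create view dbo.vOrders2
as
    select o.OrderId, o.ClientId, o.OrderDate, o.OrderNumber, o.Amount,
        c.ClientName
    from dbo.Orders o left outer join dbo.Clients c on
        o.ClientId = c.ClientId
go

select OrderId, OrderNumber
from dbo.vOrders2
where OrderId = 100;
```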

As you can see, in this example SQL Server does not care whether an order has a corresponding client. It seeks in the Orders table only. On the other hand, an outer join limits the choices the optimizer has. It cannot switch between the inner and outer record sets when it processes nested loops, and so on. So technically this is not the best option.

Another method is to add a foreign key constraint (don't forget to add an index on the Orders.ClientId column to avoid a performance hit). If you run the original select, you'll see that SQL Server eliminates the join. By the way, if you decide to run the demo scripts, make sure to use a regular database – tempdb does not have join elimination.
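Something along these lines (the index and constraint names are illustrative; the constraint has to be trusted for join elimination to happen):

```sql
create index IDX_Orders_ClientId on dbo.Orders(ClientId);

alter table dbo.Orders
add constraint FK_Orders_Clients
foreign key (ClientId)
references dbo.Clients(ClientId);
```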


Looks like it solves our problems, but unfortunately there are some cases when foreign key constraints cannot be created. And there are some cases when SQL Server cannot eliminate the joins even with the constraints in place. Look here (click to open the plan in a different window):

So we don't know whether SQL Server will be able to eliminate the joins every time, even with all foreign keys in place. What we know for sure is that SQL Server does not do that when you have multi-column foreign keys. Let's see that. First, let's create the same Orders and Clients tables with an additional Int1 column.
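A sketch of such a schema (the table names are illustrative):

```sql
create table dbo.Clients2
(
    ClientId int not null,
    Int1 int not null,
    ClientName varchar(64) not null,
    constraint PK_Clients2 primary key clustered(ClientId, Int1)
);

create table dbo.Orders2
(
    OrderId int not null identity(1,1),
    ClientId int not null,
    Int1 int not null,
    OrderNumber varchar(32) not null,
    Amount money not null,
    constraint PK_Orders2 primary key clustered(OrderId),
    constraint FK_Orders2_Clients2
    foreign key (ClientId, Int1)
    references dbo.Clients2(ClientId, Int1)
);
```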

Now let's create the view, run the same select and look at the plan.
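Continuing the sketch:

```sql
create view dbo.vOrders3
as
    select o.OrderId, o.OrderNumber, o.Amount, c.ClientName
    from dbo.Orders2 o join dbo.Clients2 c on
        o.ClientId = c.ClientId and o.Int1 = c.Int1
go

select OrderId, OrderNumber
from dbo.vOrders3
where OrderId = 100;
```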

As you can see, SQL Server still does the join. This is one of the "design limitations" SQL Server has – no join elimination with multi-column foreign keys.

And if you think about it, this is a very simple case. I've seen systems with views that include 20+ joins, and in the end every select against them introduces a lot of extra IO you don't need. So I always ask myself – are the minor benefits views give you worth the effort?

Source code is available for download.

Sunday T-SQL Tip: Server Side paging with Microsoft SQL Server (part II)

Last week we saw how we can optimize server-side paging with Microsoft SQL Server. The method works just fine, although there is one other standard requirement for such tasks. Typically, when you want to display just one page of data in the UI, you also need to provide the ability to navigate through the pages and, as a result, you need to know the total number of pages.

There are 2 ways to accomplish it from the coding perspective. Obviously the simplest one is to have another select that counts the number of rows and returns it either as a second result set or as an output variable. It's too trivial to show here. Let's talk about the situation when we need to return the total row count as an additional column in the result set.

Let's use the same tables from last week. You can achieve this with something like the following SQL:
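A sketch of the approach (it assumes the Orders/Clients schema and the OrderDate index from last week; the RowCnt CTE counts the rows, and a cross join adds that count to every row of the page):

```sql
;with RowCnt(TotalRowCnt)
as
(
    select count(*) from dbo.Orders
)
,Keys(OrderId, RowNum)
as
(
    select OrderId, row_number() over (order by OrderDate) as RowNum
    from dbo.Orders
)
select o.OrderId, o.OrderNumber, o.OrderDate, o.Amount, c.ClientName,
    r.TotalRowCnt
from Keys k
    join dbo.Orders o on o.OrderId = k.OrderId
    join dbo.Clients c on c.ClientId = o.ClientId
    cross join RowCnt r
where k.RowNum between 100 and 150
order by k.RowNum;
```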

 

Let's take a look at the plan. As you can see, SQL Server scans the index twice: once for the RowCnt CTE and once for the paging. It's worth mentioning that, because of the index and the sort order, the paging part needs to scan only the first 150 rows. Alternatively, if you change the sort order, you'll see:

This time the paging part performs the same full index scan.

If you ask me which method is better, I'd say it's more or less a matter of personal preference. There are no visible performance benefits in either case. Obviously the additional column introduces additional network traffic, but it's not a huge amount of data. So technically it would not matter if you moved it to a separate select. But there are some cases when you have to support legacy code and need to return RowCnt in the main result set. In such cases, the second CTE with a cross join is an elegant solution for the task.

Source code is available for download

Sunday T-SQL Tip: Server Side paging with Microsoft SQL Server

Let's assume you have a UI that shows a subset of the data page by page, with sorting by a wide set of columns in both ascending and descending order. This is probably one of the worst tasks for database developers and administrators – quite easy to implement with the ROW_NUMBER() function, but very hard to tune.

Let's take a look. First of all, let's create the tables and populate them with some data.
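A sketch of such a setup (the schema is illustrative; the important part is the 100,000 rows in Orders):

```sql
create table dbo.Clients
(
    ClientId int not null,
    ClientName varchar(64) not null,
    constraint PK_Clients primary key clustered(ClientId)
);

create table dbo.Orders
(
    OrderId int not null identity(1,1),
    ClientId int not null,
    OrderDate datetime not null,
    OrderNumber varchar(32) not null,
    Amount money not null,
    Placeholder char(100) null,
    constraint PK_Orders primary key clustered(OrderId)
);
go

-- 100 clients
;with N(Num) as
(
    select 1 union all select Num + 1 from N where Num < 100
)
insert into dbo.Clients(ClientId, ClientName)
    select Num, 'Client ' + convert(varchar(10), Num) from N;
go

-- 100,000 orders
;with N1(C) as (select 0 union all select 0)                 -- 2 rows
,N2(C) as (select 0 from N1 a cross join N1 b)               -- 4 rows
,N3(C) as (select 0 from N2 a cross join N2 b)               -- 16 rows
,N4(C) as (select 0 from N3 a cross join N3 b)               -- 256 rows
,N5(C) as (select 0 from N4 a cross join N4 b)               -- 65,536 rows
,Nums(Num) as
(
    select row_number() over (order by (select null))
    from N5 a cross join N2 b                                -- 262,144 rows
)
insert into dbo.Orders(ClientId, OrderDate, OrderNumber, Amount)
    select Num % 100 + 1, dateadd(minute, -Num, getdate()),
        convert(varchar(32), Num), Num % 1000
    from Nums
    where Num <= 100000;
go
```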

Now, let's try to return rows #100-150 sorted by OrderDate ascending. I'm disabling parallelism to make it a little bit simpler to analyze. So let's start with the "standard" approach. Unfortunately SQL Server does not allow ranking functions in the WHERE clause, so we need to use either a CTE or a subselect. Click on the images below to open them in a separate window.
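The "standard" approach could look like this (OPTION (MAXDOP 1) disables parallelism, as mentioned above):

```sql
;with Paging(OrderId, OrderNumber, OrderDate, Amount, ClientName, RowNum)
as
(
    select o.OrderId, o.OrderNumber, o.OrderDate, o.Amount, c.ClientName,
        row_number() over (order by o.OrderDate) as RowNum
    from dbo.Orders o join dbo.Clients c on
        o.ClientId = c.ClientId
)
select OrderId, OrderNumber, OrderDate, Amount, ClientName
from Paging
where RowNum between 100 and 150
order by RowNum
option (maxdop 1);
```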

Now let's take a look at the plan. I'm using SQL Sentry Plan Explorer – a great tool – check it out if you are not familiar with it.

In order to calculate row_number, SQL Server needs to read the entire row set and then sort it. Obviously performance is not so great. So let's think about how we can improve that. One of the first things here – we don't want to do the join at the "pre-sorting stage". So first of all, let's move the join from the CTE to the main select:
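Something like this:

```sql
;with Paging(OrderId, ClientId, OrderNumber, OrderDate, Amount, RowNum)
as
(
    select OrderId, ClientId, OrderNumber, OrderDate, Amount,
        row_number() over (order by OrderDate) as RowNum
    from dbo.Orders
)
select p.OrderId, p.OrderNumber, p.OrderDate, p.Amount, c.ClientName
from Paging p join dbo.Clients c on
    p.ClientId = c.ClientId
where p.RowNum between 100 and 150
order by p.RowNum
option (maxdop 1);
```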

As you can see, now we need to perform the join with the Clients table only for 51 rows, so it looks better. Let's try to dive a little bit deeper into the details.

We have 2 major problems. One is obvious (red circle) – it's the clustered index scan. We will talk about it in a minute. The second one (blue circles) is much less obvious – it's the sorting. Sorting is a blocking operation – it needs to collect the entire input first. So in our case, it's 100,000 rows and 14.7MB of data. Well, first of all, that's memory and CPU consuming, but more importantly, if the statistics are not ideal and SQL Server underestimates the number of input rows, there is a good chance that it would spill the sort to tempdb, which is also a major performance hit. How can we improve it? Let's massage our SQL a little bit:
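A sketch of the idea: sort only the clustered key and the sorting column, then join back to pick up the rest of the columns for the 51 rows of the page.

```sql
;with Keys(OrderId, RowNum)
as
(
    select OrderId, row_number() over (order by OrderDate) as RowNum
    from dbo.Orders
)
select o.OrderId, o.OrderNumber, o.OrderDate, o.Amount, c.ClientName
from Keys k
    join dbo.Orders o on o.OrderId = k.OrderId
    join dbo.Clients c on c.ClientId = o.ClientId
where k.RowNum between 100 and 150
order by k.RowNum
option (maxdop 1);
```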

At first glance, it produces a plan that is not as good as the previous one. Although, if you take a look at the data size, you'll see that the SORT operator requires 8 times less memory. As a result, it is less CPU-intensive and executes faster. In the end, it can give you huge performance improvements when the output row is large.

Now let's think about IO. Let's create an index on OrderDate.
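For example (the index name is illustrative):

```sql
create index IDX_Orders_OrderDate on dbo.Orders(OrderDate);
```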

Obviously it helps. It removes the sort operator – SQL Server just needs to scan the first 150 rows from the index. The problem is that this index covers only one specific case – one column and one ASC sort order. If you change it to DESC, SQL Server still needs to sort the data.

Well, if you have 10 different columns users can sort by, in both ASC and DESC order, you would need to create 20 different indexes to cover all possible cases. That's not feasible, of course. On the other hand, if there are just a couple of combinations that users typically use, it could make sense to create a couple of indexes to cover them.

One other thing worth mentioning: instead of using a clustered index scan, SQL Server can use a non-clustered index scan, which is far more efficient as long as the sorting column is there. So it could make sense to create at least one index with all the required columns included to help that select. For example, if you have a system that always filters by client id and sorts by other order columns, it could make sense to create an index like this:
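A sketch of such an index (the key and included columns are illustrative):

```sql
create index IDX_Orders_ClientId
on dbo.Orders(ClientId)
include (OrderDate, OrderNumber, Amount);
```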

Next week I'll show how to add the total row count as part of the same statement.

Source code is available for download

Referential Integrity (Part 2: triggers and other methods)

Last week we discussed foreign key constraints as the way to implement referential integrity. Today I'd like to focus on the other implementation approaches.

First of all, let's try to outline the situations when foreign key constraints would not work for you:

  1. When you need to reference a table involved in partition switches. SQL Server simply does not allow it.
  2. When detail and master data can be inserted/deleted out of order. For example, in my system we have 2 streams of transactional data and one stream references the other. Those streams are collected and processed differently, so we cannot guarantee that the master data row will be inserted prior to the detail data row. So foreign key constraints are not an option for us.
  3. When an additional index on the detail (referencing) column is not appropriate. This one is interesting. There are some cases when you don't want to maintain/support another index on the table. This approach would not work very well if you want to delete detail rows at the same time as the master rows, but if that requirement is not critical, you can purge detail rows once per night, even if it forces scan(s) of the detail table. In some cases it makes sense.

How do you accomplish it? First, you can handle it in the client code. Generally this solution is not really good and can easily become a nightmare, especially if the system does not have a dedicated data access layer/tier. With a data access layer (especially if it's implemented on the database side via stored procedures) it's not so clear-cut. On one hand, it gives you all the control you don't have with triggers. On the other, you need to make sure that there is no code, especially legacy code, that bypasses the data access layer/tier. And you also need to be sure that the same will hold true in the future. Again, in some cases it could make sense. It depends.

The second, obvious method is using triggers. Frankly, I don't see any benefits of using triggers compared to actual foreign key constraints if you put the deletion statement in the trigger. Although something like this can make sense (it uses the same tables created last week):
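A sketch of the idea (the queue table and trigger names are illustrative):

```sql
create table dbo.OrderCleanupQueue
(
    OrderId int not null,
    constraint PK_OrderCleanupQueue primary key clustered(OrderId)
);
go

create trigger trg_Orders_AfterDelete on dbo.Orders
after delete
as
begin
    set nocount on;

    -- Do not delete detail rows here; just remember what has been deleted
    insert into dbo.OrderCleanupQueue(OrderId)
        select OrderId from deleted;
end
go
```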

As you can see, the trigger simply inserts the list of deleted OrderIds into a queue table. Next, you can have a SQL Server job running during off-peak hours that deletes the data from the detail table.

That example covers the case of deleting master rows. As for the detail (referencing) side, there are a couple of things you can do. The first is a trigger:
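A sketch of such a trigger (the detail table name, OrderLineItems, is illustrative):

```sql
create trigger trg_OrderLineItems_InsUpd on dbo.OrderLineItems
after insert, update
as
begin
    set nocount on;

    -- Roll back modifications that reference a non-existent master row
    if exists
    (
        select *
        from inserted i
        where not exists
        (
            select * from dbo.Orders o where o.OrderId = i.OrderId
        )
    )
    begin
        raiserror('Referential integrity violation: dbo.Orders', 16, 1);
        rollback tran;
    end
end
go
```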

The second is using a user-defined function and a check constraint.
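A sketch (again, the function, constraint and table names are illustrative):

```sql
create function dbo.OrderExists(@OrderId int)
returns bit
as
begin
    return
    (
        case when exists
        (
            select * from dbo.Orders o where o.OrderId = @OrderId
        ) then 1 else 0 end
    );
end
go

alter table dbo.OrderLineItems
add constraint CHK_OrderLineItems_OrderId
check (dbo.OrderExists(OrderId) = 1);
go
```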

This approach could be better than the trigger because it does not fire the validation if the OrderId column has not been changed.

In any case – to summarize:

  • Referential integrity is generally good:
    • It makes sure that the data is clean
    • It helps the optimizer in some cases
    • It helps to detect errors at early stages
  • Referential integrity always introduces performance implications regardless of the implementation, although in most systems those implications are minor. If you cannot afford to have referential integrity implemented in your system, at least enable it during the DEV/QA stages. Be careful and disable it for performance testing though, because foreign key constraints can change the plans.
  • Use foreign keys unless you have one of the specific cases described above.
  • Do not implement referential integrity in the code/data access tier only UNLESS you have absolute control over the database and are absolutely sure that nothing will change in the future. And think twice about it.