Category Archives: T-SQL

Sunday T-SQL Tip: Context_Info

One of the interesting but not widely known features in SQL Server is Context_Info. It is basically connection/session-specific 128-byte storage.

You can set it with the SET CONTEXT_INFO statement. As for getting the value – there are a few methods available:
1. The Context_Info() function
2. The context_info column in the sys.dm_exec_requests and sys.sysprocesses views. This value is updated immediately within the batch.
3. The context_info column in the sys.dm_exec_sessions view. This value is not updated until the end of the batch.

Let’s see that in action:
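The original demo script is not reproduced here, but a minimal sketch of it could look like this (the value is just an illustration):

declare @Ctx varbinary(128) = cast('Some context data' as varbinary(128))

set context_info @Ctx

-- 1. the function returns the value within the session
select context_info() as CtxFromFunction

-- 2. sys.dm_exec_requests shows the value immediately within the batch
select context_info
from sys.dm_exec_requests
where session_id = @@spid

-- 3. sys.dm_exec_sessions is not updated until the batch completes
select context_info
from sys.dm_exec_sessions
where session_id = @@spid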


What can you do with that? Funny things, like recursion, for example:
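For example, a stored procedure can use context_info to track its own recursion depth. A sketch along these lines (the procedure name and the depth limit are mine):

create procedure dbo.DoRecursion
as
begin
    -- the first 4 bytes of context_info hold the current recursion depth
    declare @Depth int = isnull(cast(substring(context_info(), 1, 4) as int), 0)

    print 'Recursion level: ' + convert(varchar(10), @Depth)

    if @Depth < 5
    begin
        declare @Ctx varbinary(128) = cast(@Depth + 1 as varbinary(4))
        set context_info @Ctx
        exec dbo.DoRecursion
    end
end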

As for a more practical example – we have a 3rd party system that we have to integrate with our database and applications. The integration is done with triggers – when a table is updated, the trigger modifies the 3rd party database and vice versa. In a few specific cases we should not propagate the changes, and context_info is ideal for that. We set the value before the update and check it in the trigger. If the flag is set, the propagation does not run.
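A hedged sketch of that pattern (the table, the trigger and the one-byte flag convention are hypothetical):

-- the trigger checks the flag and skips the propagation when it is set
create trigger trgOrdersUpdate on dbo.Orders
after update
as
begin
    -- 0x01 in the first byte means "do not propagate this change"
    if substring(context_info(), 1, 1) = 0x01
        return

    -- ...propagate the changes to the 3rd party database here...
end
go

-- the caller that should not propagate the change
declare @Ctx varbinary(128) = 0x01
set context_info @Ctx

update dbo.Orders
set ImportedFlag = 1
where OrderId = 123

-- clear the flag
set context_info 0x00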

Sunday T-SQL tip: Temporary table variables and transaction context

Recently I’ve been asked if it’s possible to partially commit a transaction. Interestingly enough, the answer is not simple. SQL Server does not support nested transactions, although there are a few tricks you can use.

The first is savepoints. They allow you to save the transaction state at some point, roll back to that point, and commit everything you did prior to that point. Obviously, this only helps you commit the first (beginning) part of the transaction.

Let’s see it in action. First, let’s create a small table.
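Something like this (the table is just a placeholder for the demo):

create table dbo.TranTest
(
    ID int not null primary key
)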

Next, let’s see savepoints in action.
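A minimal sketch, assuming the dbo.TranTest table created above:

begin tran
    insert into dbo.TranTest(ID) values(1)

    -- remember the state of the transaction at this point
    save transaction SavePoint1

    insert into dbo.TranTest(ID) values(2)

    -- undo everything done after the savepoint...
    rollback transaction SavePoint1
-- ...and commit what was done before it
commit

-- returns only the row with ID = 1
select ID from dbo.TranTest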


Good. Now assume we would like to do the opposite task – commit the “tail” actions after a specific point. Unfortunately, you cannot do that with regular T-SQL statements. Fortunately, there is a trick – temporary table variables. They do not participate in the transaction context. Let’s see it in action:
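Here is a sketch of the trick, again using dbo.TranTest:

declare @T table(ID int not null)

begin tran
    insert into dbo.TranTest(ID) values(3)

    -- capture the "tail" work into the table variable
    insert into @T(ID) values(4)

-- the rollback undoes the insert into dbo.TranTest...
rollback

-- ...but the table variable keeps its rows, so we can save them
-- in a separate transaction
insert into dbo.TranTest(ID)
    select ID from @T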

As you can see, you can create a temporary table variable, populate it with data, roll back the transaction, and then save the data in a separate transaction. It is worth mentioning that regular temporary tables do not work that way – they fully support the transaction context.

One practical use of this scenario is audit on rollback. You can save the required data into a temporary table variable, roll back the data modification attempt, and write the audit information to the audit table.

P.S. Just wondering if anybody reads this 🙂

Sunday T-SQL tip: How to select data with unknown parameter set

Please check this SP. Does it look familiar?
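Something along these lines (the table and parameter names are made up for the illustration):

create procedure dbo.SearchCustomers
(
    @FirstName varchar(64) = null
    ,@LastName varchar(64) = null
)
as
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where
        (@FirstName is null or FirstName = @FirstName) and
        (@LastName is null or LastName = @LastName)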

Easy to implement. Easy to support. Only minor problem – it never uses indexes. Never. Period.

Let’s check that with a simplified example.
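For example (a hypothetical table; the index on LastName is there specifically to show that it is not used):

create table dbo.Customers
(
    CustomerId int not null identity(1,1) primary key
    ,FirstName varchar(64) not null
    ,LastName varchar(64) not null
)

create nonclustered index IDX_Customers_LastName on dbo.Customers(LastName)
go

declare @LastName varchar(64) = 'Smith'

-- even though @LastName is clearly not null here, the optimizer cannot
-- rely on that at compile time, so the plan scans the table instead of
-- seeking IDX_Customers_LastName
select CustomerId, FirstName, LastName
from dbo.Customers
where @LastName is null or LastName = @LastName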

For those of us who came from the imperative development world, this does not look right. How come SQL Server cannot perform the simple optimization and determine that the parameter is not null? Well, it does not. It does not know this at compilation time, so it scans the entire table and applies both predicates to every row, even though the first predicate needs to be checked only once.

So how do we deal with that? The optimal way is to specify the list of all parameter combinations. Something like this:
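A sketch with the two parameters from the example above:

if @FirstName is null and @LastName is null
    select CustomerId, FirstName, LastName
    from dbo.Customers
else if @FirstName is not null and @LastName is null
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where FirstName = @FirstName
else if @FirstName is null and @LastName is not null
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where LastName = @LastName
else
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where FirstName = @FirstName and LastName = @LastName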

Every select would be optimized. The brute-force approach requires 2^N selects, where N is the number of parameters, which introduces limitations once N grows beyond 2 or 3. In real life it can be slightly better: if you can force an index seek with one predicate, you should not really worry about the others. So it could be done in the following way:
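For example, if LastName is selective enough, something like this would do (again, just a sketch):

if @LastName is not null
    -- the seek on IDX_Customers_LastName returns few rows, so the second
    -- predicate can be applied to them without hurting performance
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where LastName = @LastName and
        (@FirstName is null or FirstName = @FirstName)
else if @FirstName is not null
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where FirstName = @FirstName
else
    select CustomerId, FirstName, LastName
    from dbo.Customers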

This still requires quite a few different selects and is hard to maintain.

Another solution is dynamic SQL:
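A sketch of that version:

declare @Sql nvarchar(max) = N'
    select CustomerId, FirstName, LastName
    from dbo.Customers
    where 1 = 1'

-- append only the predicates that are actually needed
if @FirstName is not null
    select @Sql += N' and FirstName = @FirstName'
if @LastName is not null
    select @Sql += N' and LastName = @LastName'

exec sp_executesql
    @Sql
    ,N'@FirstName varchar(64), @LastName varchar(64)'
    ,@FirstName = @FirstName
    ,@LastName = @LastName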

It can lead to compilations of the dynamic SQL on SP calls – that is the downside of this method. Fortunately, there is a good chance the plan will already be in the cache, especially if the SP is called often enough, so compilation would not be expensive.

Sunday T-SQL tip: Cross joins – when Cartesian product helps

Last Sunday I wrote about cross joins. Today I want to show another example where cross joins could be useful.

Assume you have 2 tables in the database. One is Units. The other is UnitType1Settings, which contains the set of settings for units with UnitType = 1. See below:
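The exact schema is not shown here, but a sketch that matches the description could be:

create table dbo.Units
(
    UnitId int not null primary key
    ,UnitType tinyint not null
)

create table dbo.UnitType1Settings
(
    UnitId int not null
    ,SettingName varchar(64) not null
    ,SettingValue varchar(256) null
    ,primary key (UnitId, SettingName)
)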

As a disclaimer, this design is far from optimal. Please don’t use it in your systems. Even though I’m using it only as an example, I have seen similar approaches in real production systems.

In any case, there were about 20 settings per unit, so the developers implemented a trigger like the one below:
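A sketch of such a trigger, reduced to 3 settings:

create trigger trgUnitsInsert on dbo.Units
after insert
as
begin
    declare @UnitId int

    declare UnitCursor cursor local fast_forward for
        select UnitId from inserted where UnitType = 1

    open UnitCursor
    fetch next from UnitCursor into @UnitId
    while @@fetch_status = 0
    begin
        -- one insert per setting (3 here, about 20 in the real system)
        insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        values(@UnitId, 'Setting1', 'Default1')

        insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        values(@UnitId, 'Setting2', 'Default2')

        insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        values(@UnitId, 'Setting3', 'Default3')

        fetch next from UnitCursor into @UnitId
    end
    close UnitCursor
    deallocate UnitCursor
end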

Obviously this is not the best implementation. A cursor, 20 separate inserts – not even talking about triggers in general.

Let’s test it with an insert statement that inserts 20 units, 10 of them with UnitType = 1:
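For example:

;with Numbers(Num)
as
(
    select 1
    union all
    select Num + 1 from Numbers where Num < 20
)
insert into dbo.Units(UnitId, UnitType)
    select Num, case when Num <= 10 then 1 else 2 end
    from Numbers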

The original trigger produces 30 (and, in real life, 200) separate inserts, with a plan like this:

So how can you optimize that (assuming you want to keep the trigger in place)? The first step is to get rid of the cursor and do something like this:
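For example, replace the cursor inside the trigger with a single insert per setting:

-- inside the trigger body, instead of the cursor
insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
    select UnitId, 'Setting1', 'Default1' from inserted where UnitType = 1
    union all
    select UnitId, 'Setting2', 'Default2' from inserted where UnitType = 1
    union all
    select UnitId, 'Setting3', 'Default3' from inserted where UnitType = 1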

If you look at the plan, you can see:

Better, but it still requires multiple scans (20 of them) of the inserted table. And this is where a cross join can help:
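A sketch of the cross join version of the trigger body:

-- inside the trigger body: one scan of inserted, cross joined
-- with a constant rowset of settings
insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
    select i.UnitId, s.SettingName, s.SettingValue
    from inserted i cross join
    (
        select 'Setting1', 'Default1'
        union all
        select 'Setting2', 'Default2'
        union all
        select 'Setting3', 'Default3'
    ) s(SettingName, SettingValue)
    where i.UnitType = 1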

This implementation basically builds a rowset with the 3 (or 20) settings and then simply does a Cartesian product with all units with UnitType = 1.

As you can see in this plan, it uses only one scan of the inserted table, which is much more efficient than the previous implementations.

What is the optimal size for variable width columns?

Yesterday we had a discussion during the Tampa Bay SQL Server User Group meeting regarding the size of variable-width columns. The question was whether it matters how such columns are defined – for example, if a column holds 100 characters, should it be defined as varchar(100), varchar(1000) or varchar(max)?

The answer is not so simple. From the storage perspective it does not really matter – the row stores the actual data size plus 2 bytes in the offset array. The question, though, is how it affects performance.

When the Query Optimizer generates a query plan, it needs to estimate how much memory is needed for the query execution. It estimates the number of rows returned by each iterator (valid statistics are the key) as well as the row size. And this is where the declared size comes into play. SQL Server does not know whether a varchar(100) column holds 1 character or 100 characters on average, so it uses a simple assumption – 50% of the declared column size, and 4000 bytes for (max) columns.

A correct estimate is especially critical for sort and hash operations – if the operation cannot be done in memory, SQL Server flushes the data to tempdb and performs the operation there. And this is a major performance hit. Let’s see it in action.

Let’s create a table with 2 varchar fields – 100 and 210 characters – and populate it with 1M rows, putting 100 characters into both varchar fields.
create table dbo.Varchars
(
    ID int not null,
    Field1 varchar(100) not null,
    Field2 varchar(210) not null,
    primary key (ID)
)
go

declare
    @I int = 0

begin tran
    while @I < 1000000
    begin
        insert into dbo.Varchars(ID, Field1, Field2)
        values(@I, REPLICATE(CHAR(65 + @I % 26), 100), REPLICATE(CHAR(65 + @I % 26), 100))

        select @I += 1
    end
commit

Now let’s run a few tests. Let’s run SQL Profiler and trace the “Sort Warnings” event. That event tells us when data is flushed to tempdb for sorting purposes. Let’s run a simple query:
declare @A varchar(1000)
select @A = Field1
from dbo.Varchars
where ID < 42000
order by Field1

Let’s look at the query plan – as you can see, the row size is estimated incorrectly.

We can also see the Sort Warning in SQL Profiler.

Finally, we can see that the query uses 6,656KB of memory.

Let’s run the second query:
declare @A varchar(1000)
select @A = Field2
from dbo.Varchars
where ID < 42000
order by Field2

Same query plan but different row size.

There are no Sort Warnings in SQL Profiler. As for memory, this query uses 8,088KB.

Finally, let’s run each query 100 times in a loop and compare the execution statistics:
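Something like this, one loop per query (only the Field1 version is shown; the Field2 one is identical except for the column):

declare
    @I int = 0
    ,@A varchar(1000)
    ,@Start datetime = getdate()

while @I < 100
begin
    select @A = Field1
    from dbo.Varchars
    where ID < 42000
    order by Field1

    select @I += 1
end

select datediff(millisecond, @Start, getdate()) as [Field1: 100 runs, ms]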

As you can see, the first query is 25% slower than the second one. This is quite a simple example – with complex queries the difference could be dramatic.

So, on one hand, it looks like a good idea to always define variable-width columns really wide. On the other hand, that introduces a different problem – you don’t want to overestimate the row size either. It leads to bigger memory consumption, and waits for large memory grants can be longer than for small ones. I would suggest defining those columns about 2 times bigger than the actual average data size, so the row size estimate is as accurate as possible.

Sunday T-SQL tip: Cross joins and empty rowsets

As we know, a cross join without a WHERE clause produces the Cartesian product of the tables. So, for example, if the first table has 5 rows and the second table has 3 rows, the result set will have 15 rows with all possible combinations of values from both tables.

Its real-life usage is quite limited, though – I believe I have used it maybe 2 times during the last 5 years. The first time, the task was to combine account-specific configuration settings and system configuration settings into one row. Something like:


select
    a.ConfigValue1
    ,a.ConfigValue2
    ,c.GlobalConfigValue1
    ,c.GlobalConfigValue2
from
    dbo.Accounts a cross join dbo.Config c

The second time it was a similar task – combining information from a master record with the list of detail record IDs in XML format. It could easily be accomplished with a function, of course.


;with Details(UserList)
as
(
    select UserId
    from dbo.Users
    where AccountId = @AccountID
    for XML raw('User'), Root('Users')
)
select a.AccountId, a.AccountNumber, d.UserList
from dbo.Accounts a cross join Details d
where a.AccountId = @AccountID

 

The only problem with this approach is that if one of the rowsets in the join is empty, it behaves like an inner join and does not return any rows. This is absolutely correct behavior if you think about it, although you typically don’t expect it to work that way when you write the query. You can see it with the following example:


;with CTE1(field1)
as
(
    select 'field1' where 1 = 1
)
,CTE2(field2)
as
(
    select 'field2' where 1 = 0
)
select CTE1.Field1, CTE2.Field2
from CTE1 cross join CTE2

So, if you want to handle this properly, you need to change the cross join to a full outer join with an always-true predicate:

;with CTE1(field1)
as
(
    select 'field1' where 1 = 1
)
,CTE2(field2)
as
(
    select 'field2' where 1 = 0
)
select CTE1.Field1, CTE2.Field2
from CTE1 full outer join CTE2 on
    1 = 1

It will do the trick.

Sunday T-SQL tip: How to save large data packet in SQL Server 2005

Last time we saw that TVPs are the clear winner compared with separate insert calls. Obviously, that is not an option with SQL Server 2005, so let’s see what we can do there.

We’re trying to accomplish 2 things:

  1. Make the save process faster
  2. Minimize the transaction duration

XQuery

Let’s start with element-centric XML. Let’s create the following procedure:

Element-centric insert
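The original procedure was shown as a screenshot. A sketch of an element-centric version, assuming a dbo.Data(ID, Name, Created) table and XML shaped as /Rows/Row with ID, Name and Created elements (all names are my assumption):

create procedure dbo.InsertDataElementCentric
(
    @Data xml
)
as
begin
    insert into dbo.Data(ID, Name, Created)
        select
            T.n.value('(ID/text())[1]', 'int')
            ,T.n.value('(Name/text())[1]', 'varchar(64)')
            ,T.n.value('(Created/text())[1]', 'datetime')
        from @Data.nodes('/Rows/Row') T(n)
end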

Now let’s run our tests again. For 5,000 records, the average time is about 8 seconds. For 50,000 records, the average time is about 82-84 seconds. Quite surprising results. Let’s take a look at the execution plan.

As you can see, SQL Server basically does a join for every element/column in the XML packet. Obviously, the more elements/columns you have, the more joins it produces and the slower the performance is.

Let’s check attribute-centric XML. The procedure would look almost the same:
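A sketch of the attribute-centric version (same assumptions about the table; the XML now uses attributes on each Row element):

create procedure dbo.InsertDataAttributeCentric
(
    @Data xml
)
as
begin
    insert into dbo.Data(ID, Name, Created)
        select
            T.n.value('@ID', 'int')
            ,T.n.value('@Name', 'varchar(64)')
            ,T.n.value('@Created', 'datetime')
        from @Data.nodes('/Rows/Row') T(n)
end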

 

For 5,000 records, the average time is about 2.7 seconds. For 50,000 records, the average time is about 27-28 seconds. Better, but still slower than classical inserts. The plan also looks slightly better, but it still uses the same approach with joins.

 

So, as we can see, XQuery performance greatly depends on the number of columns/joins.

OpenXML

Well, the only other option left with XML is OpenXML. Let’s modify our procedure and run our tests:
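A sketch of the OpenXML version (same hypothetical table and attribute-centric XML):

create procedure dbo.InsertDataOpenXML
(
    @Data xml
)
as
begin
    declare @DocHandle int

    exec sp_xml_preparedocument @DocHandle output, @Data

    insert into dbo.Data(ID, Name, Created)
        select ID, Name, Created
        from openxml(@DocHandle, '/Rows/Row', 1) -- 1 = attribute-centric mapping
        with (ID int, Name varchar(64), Created datetime)

    -- always release the document handle
    exec sp_xml_removedocument @DocHandle
end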

 

For 5,000 records, the average time is about 1.4 seconds. For 50,000 records, the average time is about 14-15 seconds. As you can see, the results are much better – even better than with the classical approach. The only minor problem with this approach is memory – the MSXML parser can use up to one-eighth of the total memory available to SQL Server. Huh?

Temporary table approach

So it looks like we’re out of luck. The only option that is faster than classical inserts is OpenXML, and we cannot use it unless memory is not an issue at all.

Still, let’s think about what can be done if our main problem is not performance but concurrency. With the classical approach and multiple inserts in one transaction, we place the first exclusive lock on a new row with the first insert. In some cases we simply cannot afford to keep locks on the rows for a few seconds.

If this is the main issue, we can insert the data into a temporary table first and then move it to the main table in one short transaction. If we update our attribute-centric insert stored procedure to insert the data into a temporary table variable first, it increases execution time by 200-300 milliseconds (for 5,000 records), although the transaction duration is just 30-40 milliseconds. I’m not posting the code here, but it’s available for download.

If we have a lot of columns and XQuery is not an option, we can use a temporary table created at the connection level. Temporary tables are visible through the execution stack – so if you create a temporary table at the connection level (after the SqlConnection.Open() call, for example), you have access to it from any stored procedure called in the same connection context. As a result, the implementation could include 2 steps:

  1. Save the data to the temporary table with separate insert statements
  2. Move the data from the temporary table to the permanent tables in one transaction
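A T-SQL sketch of these two steps, reusing the hypothetical dbo.Data table from above (the procedure and table names are mine):

-- the procedure is deployed once; it references the temporary table
-- that the caller creates at the connection level
create procedure dbo.MoveDataFromTempTable
as
begin
    begin tran
        insert into dbo.Data(ID, Name, Created)
            select ID, Name, Created from #Data

        delete from #Data
    commit
end
go

-- step 1: executed by the client right after SqlConnection.Open()
create table #Data
(
    ID int not null
    ,Name varchar(64) not null
    ,Created datetime not null
)

-- ...the client populates #Data with separate insert statements...

-- step 2: one short transaction moves the accumulated rows
exec dbo.MoveDataFromTempTable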

Script and sources are available for download – “Refactoring for Performance” presentation from SQL Saturday #62

Sunday T-SQL tip: A few words about Table-Valued Parameters (TVP)

During my presentation at the SQL Saturday #40 event, I mentioned that one of the biggest improvements in SQL Server 2008 from a development standpoint is support for table-valued parameters (TVP). It not only simplifies development – think about passing a temporary table variable as a parameter from within T-SQL code as well as from client applications. It also gives a great performance improvement, especially if you need to save a large data packet. Think about data processing tasks where a client application loads a large record set, processes it and saves it back in one transaction.

Let’s do some tests. Let’s create the table:

Table creation statement
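The actual definition is in the screenshot; for the purposes of this post, assume something like:

create table dbo.Data
(
    ID int not null
    ,Name varchar(64) not null
    ,Created datetime not null
)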

First, let’s use the classical approach – use SqlCommand and insert records in a loop:

Classical approach

Now let’s use a TVP. First of all, we need to create a type:

TVP
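A sketch of such a type, assuming the dbo.Data columns above:

create type dbo.tvpData as table
(
    ID int not null
    ,Name varchar(64) not null
    ,Created datetime not null
)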

The next step is creating the stored procedure we will use to insert new records:

Insert procedure
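A sketch of the procedure (note that a TVP parameter must be declared readonly):

create procedure dbo.InsertDataTVP
(
    @Data dbo.tvpData readonly
)
as
begin
    insert into dbo.Data(ID, Name, Created)
        select ID, Name, Created
        from @Data
end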

Next, let’s modify the client code to use the TVP and the stored procedure:

TVP approach

It is worth mentioning that the client code needs to create and populate a DataTable object to assign it to the parameter. Obviously, this requires some time; on the other hand, this is client code – the transaction has not been started yet.

Finally, let’s run a few tests:

For a package of 5,000 records, the average execution time of the classical method on my laptop is around 2,600-2,700 milliseconds. The TVP approach takes only about 180 milliseconds, of which 60 milliseconds is DataTable creation. So the actual transaction time is only around 130 milliseconds – about 20 times faster than the classical method.

For a package of 50,000 records, the average execution times are 20,500 ms and 1,300 ms respectively, with 300 ms of that going to DataTable creation.

As you can see, it’s not only the speed – the transaction duration is also about 20 times shorter. A huge difference.

Obviously, if you’re on SQL Server 2005, TVPs are not an option. However, there are a couple of methods that can help you improve the classical approach. I will show them to you in a few days.