Monthly Archives: August 2010

Sunday T-SQL tip: How to select data with unknown parameter set

Please check this SP. Does it look familiar?
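The original listing is an image in the source post; a minimal sketch of the pattern in question (the dbo.Orders table, columns and parameters are hypothetical) could look like this:

create procedure dbo.SearchOrders
(
    @OrderId int = null,
    @CustomerId int = null
)
as
begin
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders
    where
        (@OrderId is null or OrderId = @OrderId) and
        (@CustomerId is null or CustomerId = @CustomerId)
end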

Easy to implement. Easy to support. There is only one minor problem – it never uses indexes. Never. Period.

Let’s check that with a simplified example.

For those of us who came from the imperative development world, this does not look right. How come SQL Server cannot apply the simple optimization and determine that the parameter is not null? Well, it does not. It does not know the parameter value at the compilation stage, so it scans the entire table and applies both predicates to every row, even though the first predicate only needs to be checked once.

So how do we deal with that? The optimal way is to cover every combination of parameters with its own select. Something like this:
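A sketch of that approach for the two hypothetical parameters above:

if @OrderId is not null and @CustomerId is not null
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders
    where OrderId = @OrderId and CustomerId = @CustomerId
else if @OrderId is not null
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders
    where OrderId = @OrderId
else if @CustomerId is not null
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders
    where CustomerId = @CustomerId
else
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders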

Every select is optimized individually. This brute-force approach requires 2^N selects, where N is the number of parameters, which becomes impractical once N grows beyond 2-3. In real life it can be slightly better: if one predicate alone gives you an index seek, you do not really need to worry about the others. So it could be done in the following way:
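For example, if @OrderId alone guarantees an index seek, the hypothetical procedure above could branch only on that parameter:

if @OrderId is not null
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders
    where
        OrderId = @OrderId and
        (@CustomerId is null or CustomerId = @CustomerId)
else if @CustomerId is not null
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders
    where CustomerId = @CustomerId
else
    select OrderId, CustomerId, OrderDate, Amount
    from dbo.Orders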

This still requires quite a few different selects and is hard to maintain.

Another solution is dynamic SQL:
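A sketch with sp_executesql, again using the hypothetical parameters:

declare @Sql nvarchar(max) =
    N'select OrderId, CustomerId, OrderDate, Amount from dbo.Orders where 1 = 1'

if @OrderId is not null
    select @Sql += N' and OrderId = @OrderId'
if @CustomerId is not null
    select @Sql += N' and CustomerId = @CustomerId'

exec sp_executesql
    @Sql,
    N'@OrderId int, @CustomerId int',
    @OrderId = @OrderId, @CustomerId = @CustomerId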

It leads to a compilation of the dynamic SQL statement on every SP call – that is the downside of this method. Fortunately, there is a good chance the plan will already be in the cache, especially if the SP is called often enough, so the compilation would not be expensive.

Why Row Size Matters

We have already seen that correct data types can decrease the row size. Why is it important? Obviously, one of the reasons is the size of the database and backup files, although that is not the only and, frankly, not the main reason.

When SQL Server reads or writes data, the data page needs to be in memory. If the data page is not present in the buffer pool, SQL Server needs to read it from disk. Alternatively, when a data page has been modified, SQL Server needs to write it to disk. Physical I/O is one of the most expensive operations. This leads to a simple conclusion:

Bigger Row Size = Fewer Rows Per Page = More Data Pages Per Table = More I/O Operations = Performance Degradation

Unfortunately, I/O is not the only thing affected. RAM and the buffer pool size are limited. The more data pages that need to be processed/read from disk, the more "old" pages get flushed out of the buffer pool.

As a side note, this is one of the biggest problems with non-optimized queries. Those queries are not only slow by themselves; they also flush the buffer pool with unnecessary data and compromise system performance in general. There are two reliable performance counters you can see in Performance Monitor (perfmon.exe) under the "Buffer Manager" object of your SQL Server instance: Page life expectancy, which indicates how long a data page stays in cache (the production value should be > 300 seconds), and Buffer cache hit ratio, which indicates the percentage of cases when the requested data page is already in the cache (the production value should be > 96-97%). If you monitor those counters and see low values there, most likely the problem is non-optimized queries (assuming, of course, that you have enough RAM available for SQL Server).

Let’s forget about the buffer pool for a minute and just focus on the I/O cost. Let’s create two tables and populate them with 50,000 records each:
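The original script is not reproduced here; a sketch that produces tables with similar row sizes (the names and char fillers are assumptions) could look like this:

create table dbo.LargeRows
(
    ID int not null,
    Value char(2000) not null
)

create table dbo.SmallRows
(
    ID int not null,
    Value char(10) not null
)
go

declare @I int = 0
begin tran
while @I < 50000
begin
    insert into dbo.LargeRows(ID, Value) values(@I, 'Large')
    insert into dbo.SmallRows(ID, Value) values(@I, 'Small')
    select @I += 1
end
commit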

The LargeRows table row size is a little over 2,000 bytes, so only 4 rows fit on a data page. The SmallRows table row size is just 22 bytes. Let’s see how many data pages each table has:
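One way to check it, assuming the tables from the sketch above:

select object_name(object_id) as table_name, index_type_desc, page_count
from sys.dm_db_index_physical_stats(db_id(), object_id(N'dbo.LargeRows'), null, null, null)

select object_name(object_id) as table_name, index_type_desc, page_count
from sys.dm_db_index_physical_stats(db_id(), object_id(N'dbo.SmallRows'), null, null, null)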

As we can see, the LargeRows table uses 12,500 data pages while SmallRows uses only 149.

Now let’s run some tests and see how fast a table scan is:
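A simple way to measure it, again assuming the tables above (do not clear the buffer pool on a production server):

checkpoint
dbcc dropcleanbuffers -- force both scans to perform physical reads

set statistics io on
set statistics time on

select count(*) from dbo.LargeRows
select count(*) from dbo.SmallRows

set statistics time off
set statistics io off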

As you can see, the scan of the SmallRows table is about 45 times faster than the scan of LargeRows.

Obviously, we rarely scan an entire table in real life, but the same holds true for partial scans. So always use appropriate data types.

Sunday T-SQL tip: Cross joins – when Cartesian product helps

Last Sunday I wrote about cross joins. Today I want to show another example where cross joins could be useful.

Assume you have two tables in the database. One is Units. The other is UnitType1Settings, which contains the set of settings for units with UnitType = 1. See below:
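The exact schema is not shown, so here is a hypothetical version that is close enough for the example (the column names are assumptions):

create table dbo.Units
(
    UnitId int not null primary key,
    UnitType int not null
)

create table dbo.UnitType1Settings
(
    UnitId int not null,
    SettingName varchar(64) not null,
    SettingValue varchar(256) null,
    primary key (UnitId, SettingName)
)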

As a disclaimer, this design is far from optimal. Please don’t use it in your system. Even though I’m using it only as an example, I have seen similar approaches in real production systems.

In any case, there were about 20 settings per unit, so the developers implemented a trigger like the one below:
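The original trigger is shown as an image; a sketch of that cursor-based approach, using the hypothetical tables above and only 3 of the settings:

create trigger trg_Units_Insert on dbo.Units
after insert
as
begin
    declare @UnitId int

    declare curUnits cursor local fast_forward for
        select UnitId from inserted where UnitType = 1

    open curUnits
    fetch next from curUnits into @UnitId
    while @@fetch_status = 0
    begin
        insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        values(@UnitId, 'Setting1', 'Default1')

        insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        values(@UnitId, 'Setting2', 'Default2')

        insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        values(@UnitId, 'Setting3', 'Default3')
        -- ...and so on for the remaining settings

        fetch next from curUnits into @UnitId
    end
    close curUnits
    deallocate curUnits
end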

Obviously this is not the best implementation: a cursor and 20 separate inserts, not to mention the use of triggers in general.

Let’s test it with an insert statement that inserts 20 units, 10 of which have UnitType = 1.

The original trigger produces 30 (and in real life 200) separate inserts, with a plan like this:

So how can you optimize that (assuming you want to keep the trigger in place)? The first step is to get rid of the cursor and do something like this:
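A sketch of that step, using the same hypothetical objects – still one insert per setting, but no cursor:

alter trigger trg_Units_Insert on dbo.Units
after insert
as
begin
    insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        select UnitId, 'Setting1', 'Default1' from inserted where UnitType = 1

    insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        select UnitId, 'Setting2', 'Default2' from inserted where UnitType = 1

    insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        select UnitId, 'Setting3', 'Default3' from inserted where UnitType = 1
    -- ...one insert per setting
end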

If you look at the plan, you can see:

Better, but it still requires multiple scans (20 of them) of the inserted table. And this is where a cross join can help:
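A sketch of the cross-join version:

alter trigger trg_Units_Insert on dbo.Units
after insert
as
begin
    insert into dbo.UnitType1Settings(UnitId, SettingName, SettingValue)
        select i.UnitId, s.SettingName, s.SettingValue
        from
            inserted i cross join
            (
                select 'Setting1' as SettingName, 'Default1' as SettingValue
                union all
                select 'Setting2', 'Default2'
                union all
                select 'Setting3', 'Default3'
            ) s
        where i.UnitType = 1
end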

This implementation basically builds a rowset with the 3 (in real life 20) settings and then simply does a Cartesian product with all inserted units that have UnitType = 1.

As you can see in the plan, it uses only one scan of the inserted table, which is much more efficient than the previous implementations.

What is the optimal size for variable width columns?

Yesterday, during the Tampa Bay SQL Server User Group meeting, we had a discussion about the size of variable-width columns. The question was whether it matters how such columns are defined – for example, if a column holds 100 characters, should it be defined as varchar(100), varchar(1000) or varchar(max)?

The answer is not so simple. From the storage perspective it does not really matter – the row stores the actual data size plus 2 bytes in the offset array. The question, though, is how it affects performance.

When the Query Optimizer generates a query plan, it needs to estimate how much memory is needed for query execution. It estimates the number of rows returned by each iterator (valid statistics are the key) as well as the row size. And this is where the column definition comes into play. SQL Server does not know whether a varchar(100) column holds 1 character or 100 characters on average, so it uses a simple assumption – 50% of the declared column size, and 4,000 bytes for (max) columns.

A correct estimate is especially critical for sort and hash operations – if the operation cannot be done in memory, SQL Server spills the data to tempdb and performs the operation there, which is a major performance hit. Let’s see this in action.

Let’s create a table with two varchar fields – 100 and 210 characters – and populate it with 1M rows, putting 100 characters into both varchar fields.
create table dbo.Varchars
(
ID int not null,
Field1 varchar(100) not null,
Field2 varchar(210) not null,
primary key (ID)
)
go

declare
@I int = 0
begin tran
while @I < 1000000
begin
insert into dbo.Varchars(ID, Field1, Field2)
values(@I,REPLICATE(CHAR(65 + @I % 26),100),REPLICATE(CHAR(65 + @I % 26),100))
select @I += 1
end
commit

Now let’s run a few tests. Let’s start SQL Profiler and trace the "Sort Warnings" event, which basically tells us when data is spilled to tempdb for sorting purposes. Let’s run a simple query:
declare @A varchar(1000)
select @A = Field1
from dbo.Varchars
where ID < 42000
order by Field1

Let’s look at the query plan – as you can see, the row size is estimated incorrectly.

We can also see a Sort Warning in SQL Profiler.

Finally, we can see that the query uses 6,656KB of memory.

Let’s run the second query:
declare @A varchar(1000)
select @A = Field2
from dbo.Varchars
where ID < 42000
order by Field2

The query plan is the same, but the estimated row size is different.

There are no Sort Warnings in SQL Profiler. As for memory, this query uses 8,088KB.

Finally, let’s run each query 100 times in a loop and compare the execution statistics:
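One way to do it – a sketch of the loop for the first query (the second one is identical except for Field2); elapsed times can be compared with SET STATISTICS TIME or sys.dm_exec_query_stats:

declare
    @A varchar(1000),
    @I int = 0

while @I < 100
begin
    select @A = Field1
    from dbo.Varchars
    where ID < 42000
    order by Field1

    select @I += 1
end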

As you can see, the first query is about 25% slower than the second one. This is quite a simple example – with complex queries the difference can be dramatic.

So on one hand, it looks like a good idea to always define variable-width columns really wide. On the other hand, that introduces a different problem – you don’t want to overestimate the row size either. Overestimating leads to bigger memory consumption, and waits for large memory grants can be longer than for small ones. I would suggest defining those columns at about 2 times the actual average data size, so the row size estimate is as accurate as possible.

Sunday T-SQL tip: Cross joins and empty rowsets

As we know, a cross join without a WHERE clause produces the Cartesian product of the tables. So, for example, if the first table has 5 rows and the second table has 3 rows, the result set has 15 rows with all possible combinations of the values from both tables.

Real-life usage is quite limited, though – I believe I have used it maybe twice during the last 5 years. The first time, the task was to combine account-specific configuration settings and system configuration settings into one row. Something like:


select
a.ConfigValue1
,a.ConfigValue2
,c.GlobalConfigValue1
,c.GlobalConfigValue2
from
dbo.Accounts a cross join dbo.Config c

The second time it was a similar task – combining information from a master record with the list of detail record IDs in XML format. It could easily be accomplished with a function, of course.


;with Details(UserList)
as
(
    -- FOR XML is not allowed at the top level of a CTE, so it runs in a scalar subquery
    select
    (
        select UserId
        from dbo.Users
        where AccountId = @AccountID
        for XML raw('User'), Root('Users')
    )
)
select a.AccountId, a.AccountNumber, d.UserList
from dbo.Accounts a cross join Details d
where a.AccountId = @AccountID

 

The only problem with this approach: if one of the rowsets in the join is empty, it behaves like an inner join and does not return any rows. This is absolutely correct behavior if you think about it, although you typically don’t expect it to work like that when you write the query. You can see it with the following example:


;with CTE1(field1)
as
(
select 'field1' where 1 = 1
)
,CTE2(field2)
as
(
select 'field2' where 1 = 0
)
select CTE1.Field1, CTE2.Field2
from CTE1 cross join CTE2

So if you want to handle it properly, you need to change the cross join to a full outer join with an always-true predicate:

;with CTE1(field1)
as
(
select 'field1' where 1 = 1
)
,CTE2(field2)
as
(
select 'field2' where 1 = 0
)
select CTE1.Field1, CTE2.Field2
from CTE1 full outer join CTE2 on
1 = 1

It will do the trick.

How SQL Server stores data (Extents, Data Pages, Data Row for In-Row data)

UPDATE (2013/10/16): It is time to refresh the content. I am writing a set of posts about the SQL Server storage engine – how it stores data and what happens in the database files under the hood. Please check it here.

This is still a work in progress, and the old content is available below.

I know, I have spent too much time on boring talk about "key decisions", "performance problems", etc. Let’s finally start talking about practical and useful things. I know we’re ready to write the first "create table" statement, but before we do that, let’s take a look at how SQL Server stores data. All information below applies to both SQL Server 2005 and SQL Server 2008.

Everything is stored on 8KB data pages (8,060 bytes are technically available). Eight data pages (64KB) combine into an extent. There are two types of extents – mixed extents, which store data pages belonging to different objects, and uniform extents, which store data pages belonging to a single object. The first 8 pages of an object are stored in mixed extents; after that, only uniform extents are used. All space allocation is done in extents – 64KB blocks regardless of the type (mixed or uniform).

There are a few special data page types SQL Server uses to track extent allocation. Those pages are basically bitmaps – every bit covers one extent, so one page can track about 64,000 extents, or almost 4GB of data. Let’s take a quick look at them:

GAM – Global Allocation Map – tracks whether extents are available for allocation or already in use.
SGAM – Shared Global Allocation Map – tracks whether extents are mixed extents with at least one data page available for use.
IAM – Index Allocation Map – tracks which extents are used by a specific table/index.

There is also another special page type, PFS (Page Free Space), which tracks the approximate amount of free space on a page, as well as a few other things. One PFS page covers 8,088 pages, or about 64MB of data.

Let’s dive into one level deeper and take a look at the structure of the data page.

As you can see, the first 96 bytes are the page header. After that, the page contains the set of data rows and ends with the offset array. Two things are worth mentioning. First, each row uses 2 extra bytes in the offset array. Second, the data on the page is not physically sorted.
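If you want to examine pages yourself, the well-known (though undocumented) DBCC commands can help; the database, table and page numbers below are placeholders:

-- list the pages that belong to a table
dbcc ind('MyDatabase', 'dbo.MyTable', -1)

-- redirect DBCC output to the client and dump one page, including row details
dbcc traceon(3604)
dbcc page('MyDatabase', 1, 153, 3) -- database, file id, page id, print option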

Let’s dive one level deeper and take a look at the classical data row structure for in-row data.

The first 2 bytes contain header information. The next 2 bytes store the length of the fixed-width data, followed by the fixed-width data itself. Next are 2 bytes for the number of columns, then the null bitmap (1 byte per 8 columns). It is followed by 2 bytes storing the number of variable-width columns, the variable-width column offset array (2 bytes per variable-width column) and the variable-width data. Finally, there is an optional 14-byte pointer to the version store, which is used for optimistic isolation levels (snapshot, read committed snapshot), MARS, etc.

So what is important here? First of all, fixed-width data always uses space, even when null. Variable-width data uses 2 extra bytes of offset storage for every value. Null values of variable-width data are not stored, although there are still 2 bytes in the offset array unless the null values are the last columns in the row. Sounds confusing? A little bit 🙂

So where does this lead us? Let’s think about a table that accepts some transactional data.
create table dbo.TranData
(
...
TranDate datetime not null,
Amount float not null,
IsApproved int not null,
IsPending int not null,
Created datetime not null constraint DEF_TranData_Created default (getDate())
...
)

The 5 fields we have in this table require 8 + 8 + 4 + 4 + 8 = 32 bytes. Now let’s think for a minute. Do we really need to store the transaction date/time with precision up to 3 milliseconds? Would 1-minute precision be OK? The same goes for the Created column – can we use 1-second precision? What about Amount – can we use smallmoney or maybe decimal(9,3)?

Let’s modify the table a little bit:
create table dbo.TranData
(
...
TranDate smalldatetime not null,
Amount decimal(9,3) not null,
IsApproved bit not null,
IsPending bit not null,
Created datetime2(0) not null constraint DEF_TranData_Created default (getDate())
...
)

Now it requires 4 + 5 + 1 + 0 (up to 8 bit columns share 1 byte of storage) + 6 = 16 bytes. We ended up saving 16 bytes per row. Not much. On the other hand, this is about 16KB per 1,000 rows, or about 16MB per 1M rows. And what if your system collects 1M rows per day? That would be ~5.8GB per year. What if you have 50M rows per day..? Finally, it’s not only about storage size. It greatly affects the performance of the system because of the extra I/O operations and other things (we will talk about that later).

So always use the correct data types. But don’t be too frugal either – 65,000 customers sounds like a lot when you start the project, but a year from now you may spend hundreds of hours altering your code and changing CustomerId from smallint to int. It is not worth it.

Sunday T-SQL tip: How to save large data packet in SQL Server 2005

Last time we saw that TVPs are the clear winner compared with separate insert calls. Obviously this is not an option with SQL Server 2005, so let’s see what we can do there.

We’re trying to accomplish two things:

  1. Make the save process faster
  2. Minimize the transaction duration

XQuery

Let’s start with element-centric XML. Let’s create the following procedure:

Element-centric insert
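The original procedure is shown as an image; here is a sketch of the element-centric variant, assuming a hypothetical target table dbo.Data(ID int, Value varchar(255)) and an XML packet shaped as /Rows/R:

create procedure dbo.InsertDataElementCentric
(
    @Data xml
)
as
begin
    insert into dbo.Data(ID, Value)
        select
            T.c.value('(ID/text())[1]', 'int'),
            T.c.value('(Value/text())[1]', 'varchar(255)')
        from @Data.nodes('/Rows/R') T(c)
end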

Now let’s run our tests again. For 5,000 records the average time is about 8 seconds; for 50,000 records it is about 82-84 seconds. Quite surprising results. Let’s take a look at the execution plan.

As you can see, SQL Server basically performs a join for every element/column in the XML packet. Obviously, the more elements/columns you have, the more joins it produces and the slower the performance is.

Let’s check attribute-centric XML. The procedure would look almost the same:
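A sketch of the attribute-centric variant (same hypothetical table and packet shape):

create procedure dbo.InsertDataAttributeCentric
(
    @Data xml
)
as
begin
    insert into dbo.Data(ID, Value)
        select
            T.c.value('@ID', 'int'),
            T.c.value('@Value', 'varchar(255)')
        from @Data.nodes('/Rows/R') T(c)
end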

 

For 5,000 records the average time is about 2.7 seconds; for 50,000 records it is about 27-28 seconds. Better, but still slower than classical inserts. The plan also looks slightly better, but it still uses the same approach with joins.

 

So, as we can see, XQuery performance greatly depends on the number of columns/joins.

OpenXML

Well, the only other option left with XML is OpenXML. Let’s modify our procedure and run our tests.
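A sketch of the OpenXML variant (same hypothetical table, attribute-centric packet):

create procedure dbo.InsertDataOpenXML
(
    @Data xml
)
as
begin
    declare @DocHandle int

    exec sp_xml_preparedocument @DocHandle output, @Data

    insert into dbo.Data(ID, Value)
        select ID, Value
        from openxml(@DocHandle, '/Rows/R', 1) -- 1 = attribute-centric mapping
        with (ID int, Value varchar(255))

    exec sp_xml_removedocument @DocHandle
end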

 

For 5,000 records the average time is about 1.4 seconds; for 50,000 records it is about 14-15 seconds. As you can see, the results are much better – even better than the classical approach. The only notable problem with this approach is memory: the MSXML parser uses up to one-eighth of the total memory available to SQL Server. Huh?

Temporary table approach

So it looks like we’re out of luck. The only option that is faster than classical inserts is OpenXML, and we cannot use it unless memory is not an issue at all.

Still, let’s think about what can be done if our main problem is not performance but concurrency. With the classical approach and multiple inserts in one transaction, we place the first exclusive row lock with the very first insert and hold it until the transaction commits. In some cases we simply cannot afford to keep locks on the rows for a few seconds.

If this is the main issue, we can insert the data into a temporary table first and then move it to the main table in one short transaction. If we update our attribute-centric insert stored procedure to insert the data into a table variable first, it increases the execution time by 200-300 milliseconds (for 5,000 records), although the transaction duration is just 30-40 milliseconds. I’m not posting the code here, but it’s available for download.

If we have a lot of columns and XQuery is not an option, we can use a temporary table created at the connection level. Temporary tables are visible through the execution stack – so if you create a temporary table at the connection level (after the SqlConnection.Open() call, for example), you have access to it from any stored procedure called in the same connection context. As a result, the implementation could include two steps (a sketch follows the list):

  1. Save the data to the temporary table with separate insert statements
  2. Move the data from the temporary table to the permanent table(s) in one transaction
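A minimal sketch of those two steps; the #BatchData table, the dbo.Data target and the procedure names are hypothetical:

-- deployed ahead of time; #BatchData is resolved at execution time
create procedure dbo.SaveBatchRow
(
    @ID int,
    @Value varchar(255)
)
as
    insert into #BatchData(ID, Value) values(@ID, @Value)
go

create procedure dbo.FlushBatch
as
begin
    -- one short transaction that moves the rows into the permanent table
    begin tran
        insert into dbo.Data(ID, Value)
            select ID, Value from #BatchData

        delete from #BatchData
    commit
end
go

-- at run time, right after the connection is opened:
create table #BatchData
(
    ID int not null,
    Value varchar(255) not null
)
-- then call dbo.SaveBatchRow once per row and dbo.FlushBatch at the end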

Scripts and sources are available for download – see the "Refactoring for Performance" presentation from SQL Saturday #62.

Sources of the performance problems

So all the key decisions have been made. But before we go any further, let’s think about the common sources of performance problems.

SQL Server is bad

When you hear that from someone, it means one of two things: the person is either completely green or a seasoned professional with a lot of experience in specific areas. Yes, there are some areas where SQL Server (or, for that matter, any relational database) is not the best choice. For example, let’s assume you want to create a session store for a web portal. There is no critical data involved, and if it crashes, customers just need to log in to the system again. They would probably be upset, but in the long run nothing bad happened. Would you trade relational database features and transactional consistency for speed in such circumstances?

Oh, I forgot to mention – there is a third reason. Oracle DBAs are typically better paid 🙂

Bad hardware

I would say this is the most popular complaint I hear when speaking with customers. Yes, for sure, there are cases when the system outgrows the infrastructure. It would be silly to expect a single-CPU box with 4GB of RAM to handle a database of hundreds of gigabytes. On the other hand, in most cases the server load is produced by bad system design and implementation. For example, the I/O subsystem can be overloaded because of missing indexes and table scans, which at the same time flush the buffer cache and cause memory pressure. CPU load can be driven up by excessive recompilations, and so on, and so on.

Yes, you can improve the situation by upgrading the server, but first you need to spend a ton of money, and second, you would achieve much better results by simply optimizing your system.

At the same time, let me play Devil’s advocate: sometimes it’s cheaper to upgrade the hardware than to pay a team of developers for a system redesign. It always depends on the goals.

Bad design

This is a good one. It’s so broad and unspecific that everything fits in there. The system works slowly – bad database design. You have concurrency problems – bad database design. You’re not the person who designed the system – granted, the system is badly designed.

In any case, what is bad database design? I’m not sure I can define it. There are some patterns and practices that need to be avoided. There are some architectural decisions that cannot scale well. But generally, if people knew exactly what "bad design" was, it would already be documented, taught in universities, and nobody would do it.

Honestly, I would have the same problem if you asked me what good design is. Fortunately or unfortunately, there is no such thing as a silver bullet. But as a bare minimum, you need to invest in it (hello, Agile!).

You obviously should not over-architect. If you design a system that will be used by a 50-person department over the LAN, you don’t need to aim to handle the load of all 50,000 corporate employees accessing it over the WAN – unless, of course, you have unlimited time and budget. On the other hand, you should keep in mind that one day the department could be 2 times bigger than it is now. Or even 10 times bigger.

Bad T-SQL Code

As an application developer, you have spent your entire life telling the compiler how to do things and how to achieve the goal. This approach simply does not work with database development. Instead, you should tell the database server what you want to achieve.

This is a much more complicated task than it sounds. You need to shift your mind in this direction, and it takes time... a lot of time and practice.

Concurrency issues

Locking, blocking, deadlocks... these make your life miserable. A system that works perfectly during development becomes completely unresponsive after deployment. Fortunately, it’s not as complicated as it sounds. In most cases, concurrency issues can be resolved by optimizing the queries and selecting the correct locking strategy and transaction isolation level. We will dive into this later.

Sunday T-SQL tip: A few words about Table-Valued Parameters (TVP)

During my presentation at the SQL Saturday #40 event, I mentioned that one of the biggest improvements in SQL Server 2008 from a development standpoint is table-valued parameter (TVP) support. It not only simplifies development – think about passing a table variable as a parameter from within T-SQL code as well as from client applications – it also gives a great performance improvement, especially if you need to save a large data packet. Think about data-processing tasks where a client application loads a large record set, processes it, and saves it back in one transaction.

Let’s do some tests. Let’s create the table:

Table creation statement
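The original statement is an image; for illustration, assume a simple table like this (the name and columns are hypothetical and reused in the sketches below):

create table dbo.Data
(
    ID int not null,
    Value varchar(255) not null,
    primary key (ID)
)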

First, let’s use the classical approach – SqlCommand and inserting records in a loop:

Classical approach

Now let’s use a TVP. First of all, we need to create a type:

TVP
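A sketch of the type, matching the hypothetical table above:

create type dbo.tvpData as table
(
    ID int not null,
    Value varchar(255) not null,
    primary key (ID)
)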

The next step is creating the stored procedure we will use to insert new records:

Insert procedure
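A sketch of the procedure; note that a TVP parameter must be declared as readonly:

create procedure dbo.InsertDataTVP
(
    @Data dbo.tvpData readonly
)
as
begin
    insert into dbo.Data(ID, Value)
        select ID, Value
        from @Data
end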

Next, let’s modify the client code to use the TVP and the stored procedure:

TVP approach

It is worth mentioning that the client code needs to create and populate a DataTable object to assign to the parameter. Obviously this requires some time; on the other hand, this is client code – the transaction has not been started yet.

Finally, let’s run a few tests:

For a package of 5,000 records, the average execution time of the classical method on my laptop is around 2,600-2,700 milliseconds. The TVP approach takes only about 180 milliseconds, of which 60 milliseconds is DataTable creation. So the actual transaction takes only around 130 milliseconds – about 20 times faster than the classical method.

For a package of 50,000 records, the average execution times are 20,500 ms and 1,300 ms respectively, with 300 ms of that spent on DataTable creation.

As you can see, it’s not only about speed – the transaction duration is also about 20 times shorter. Huge difference.

Obviously, if you’re on SQL Server 2005, TVPs are not an option, although there are a couple of methods that can help you improve on the classical approach. I will show them to you in a few days.